Ahead of this week’s AI Safety Summit organised by the UK Government at Bletchley Park, SME canvassed the opinion of experts in that field. Here is what they said:
Natalie Cramp, CEO at data company Profusion, said: “The AI Safety Summit is a very welcome initiative and it has the potential to be a very productive event. However, it really should just be the start of ongoing serious debate in the UK about how we want AI to develop. The UK has fallen far behind the US and EU in terms of debating concrete legislation – with the EU likely to finalise its AI Act in the coming months. At the moment the UK seems to be in limbo – on the one hand announcing a raft of initiatives like the Safety Summit, while at the same time quietly disbanding its data ethics advisory board and failing to take forward any substantive standalone AI legislation.
“This past year has shown us how rapidly AI can develop, as well as its potentially far-reaching risks and benefits. It’s critical that we move forward with putting adequate rules in place now to reduce the risk of AI getting out of control. We saw the damage that has been done through lax regulation of social media – it’s very hard to put the genie back in the bottle. If the UK Government is serious about using AI to drive forward an economic revolution, businesses, innovators and investors need certainty about what the rules of the game will be. Otherwise, the most exciting AI tech startups will simply go to the EU or US where there is likely to be much more legal clarity.
“Legislating for AI is very complex, not least because it is developing so quickly and in unpredictable ways. That is why time is of the essence: if the UK doesn’t start moving on thoroughly debating and scrutinising legislation, it runs the risk of rushing to catch up and creating a set of rules that either stifle innovation or fail to protect the public. At the moment, because the UK hasn’t yet created a coherent approach to AI, it will most likely end up following many of the rules set down by the EU. In short, the UK will become a follower rather than an AI leader.”
Sarah Gilchriest, Chief People Officer, Workforce Learning, the group that encompasses QA, Circus Street and Cloud Academy, said: “The AI Safety Summit presents a golden opportunity for the UK to play a leading global role in AI regulation. It’s important to view policy around AI as much about curtailing risk and undesirable outcomes as it is about realising its potential. It’s disappointing that largely absent from this debate is the critical role reskilling is going to play over the next decade. We need to transform the UK’s workforce to be more technically skilled with a particular emphasis on data literacy. Not only will this be the key to businesses effectively adopting AI tools and platforms, it’s also incredibly important for future-proofing careers. When the UK moved from heavy manufacturing towards a service economy many workers were left in professions that no longer existed. There was little to no help for these workers to reskill into new careers. The unemployment this caused blighted scores of communities and caused much of the economic divide we still feel the effects of today. If the UK is to avoid repeating this mistake, the Government must make reskilling a top priority.
“Any policy approach towards AI must lay out how the Government intends to facilitate a nationwide training scheme across the entire workforce. The best approach would be to work with the UK’s world-leading training industry as well as academic institutions to develop a network of low-cost upskilling hubs that focus on core data and digital skills. The truth is that successive UK governments have often talked about the importance of training, but there has been little concerted action, let alone significant funding to make training accessible to all. This is despite the UK having a well-publicised tech skills gap for more than a decade, and plenty of research indicating that upskilling could be instrumental in tackling the UK’s productivity problem. My hope is that the threat and opportunity of AI will finally focus minds towards making upskilling a national priority. We may not have seen this happen at the AI Safety Summit but there is still time for the Government to solve this problem.”
Dominik Angerer, CEO of enterprise CMS Storyblok, said: “The UK’s AI Safety Summit presents one of the first major opportunities for in-depth conversations on how AI regulation should be approached in an international context. While discussing the existential threats posed by AI is important, we cannot let it distract from more immediate concerns around how AI is being used in the short term. Generative AI in particular has thrown up a lot of legal and ethical questions around content generation. This has profound implications for how marketers, businesses and content creators approach using generative AI applications. On one hand, generative AI could unleash unparalleled creativity and efficiency in producing everything from video, art and music to marketing copy, imagery and virtual experiences. On the other hand, without sufficient regulations and controls, it could severely damage the livelihoods of content creators and be used for intrusive personalised marketing, copyright theft and misinformation.
“We have already seen legal action being taken in a number of countries around ownership of the outputs of LLMs, as well as libel action over generative AI-produced copy that is said to be false and damaging. Without a global approach to setting up guardrails around generative AI and answering these critical questions around content ownership and liability for the outputs of LLMs, it is very difficult to see how generative AI can develop in a sustainable manner. Businesses are unlikely to seriously invest in using generative AI for their marketing or other consumer-facing purposes without legal clarity. Similarly, if the compliance burden around using generative AI varies dramatically between countries, the efficiencies it provides will be diminished to the point that few large organisations will see the point in using it.
“I would like to see events like the AI Safety Summit focus on these tangible problems first. We can then use this as a foundation to develop more holistic global AI regulations that address the major long-term threats and opportunities that surround AI.”