
Fair and equal

Artificial intelligence could help power Britain's growth – but only if it is well-regulated, argues Reema Patel



Artificial intelligence has the potential to transform our economy and society. A recent sector study led by Perspective Economics for the UK government found more than 3,000 AI companies in the UK, generating more than £10bn in revenue, employing more than 60,000 people in AI-related roles and contributing £5.8bn in gross value added (GVA). In our public services, AI could help increase efficiency and improve decision-making, including in healthcare, where AI-powered diagnostic tools can support early disease detection and the effective allocation of healthcare resources.

Naturally, there are challenges as well as opportunities. If Britain is to become more productive and maintain international competitiveness, it will need a workforce with the skills to develop, adopt, use and diffuse AI. Yet stark AI inequalities are already emerging: the 'golden triangle' of London, Cambridge and Oxford leads the way in AI skills clusters, while the rest of the country sees lower levels of investment, uptake and skills. In the same way that we plan for a just climate transition, we need to consider what a just AI transition for workers and workforces might look like, so that no workforce, region, industry or sector is left behind. This will require large-scale investment in AI skills development programmes, particularly for workers in industries at risk of displacement by automation.

But there are more insidious problems to overcome, too. The arrival of ChatGPT marked a turning point in the development of generative AI – which facilitates the creation of content and ideas – with significant implications for Britain's creative, writing and research industries. Generative AI, largely trained on datasets scraped from the web, risks undermining the rights and livelihoods of the very creators it relies upon for training data. It also creates fertile ground for deepfakes and misleading content, with particular risks for the quality and accuracy of information in science, health and politics. There are longstanding concerns around privacy and data security, too, since AI systems often rely on large volumes of personal data. The recent claims that race scientists accessed UK Biobank data, for instance, illustrate the weakness of the safeguards around our data – and therefore around the AI trained on it. All of which is to say that, left unregulated or under-regulated, AI carries its fair share of risks and challenges.

AI tools and technologies can also obscure difficult choices that should be made by politicians and policymakers. There is a risk of deference towards technologies used to make social and political choices about our society – from who should receive a vaccine to how best to award an examination score.

In many different areas, existing biases can reinforce and 'automate' inequality – for instance, through facial recognition and other biometric technologies that disproportionately target underrepresented and low-income communities, or through healthcare technologies that disproportionately affect minoritised and racialised communities due to a lack of relevant healthcare data used to train the models. Bias and discrimination remain problematic, with evidence that certain algorithms reproduce or even amplify the social biases present in their training data. The digital and AI divide risks replicating existing social inequalities – and even reinforcing them – given that AI is used as a tool to structure and categorise different parts of society. The implications for civil liberties are profound. How can we get the balance right – securing proportionate data use, management and collection to improve outcomes, while preventing the outright societal surveillance of all groups for all purposes?

In the context of public services, there is growing concern about the increasing power and agency that large, unaccountable technology companies can exert over public sector decision-making through the development and use of technology. This emerging AI landscape, fraught with risk as well as possibility, highlights the importance of government investment in responsible AI development and, above all, in regulation that realises AI's benefits while protecting individuals and society. As science, research and innovation minister Patrick Vallance has indicated, it will be important to 'regulate to innovate': to balance regulation with innovation to create a safe, trustworthy and productive AI ecosystem. With world-leading institutes such as the Alan Turing Institute, the Ada Lovelace Institute and the AI Safety Institute, and pioneers such as the NHS AI Lab, the UK has everything it needs to position itself as a leader in responsible AI, supporting long-term productivity while upholding public trust. What is missing is the governance to create the right enabling conditions for what Professor Mariana Mazzucato has described as 'mission-oriented innovation' in AI development.

What might some of those enabling conditions look like? The forthcoming AI bill has the potential to position responsible AI governance as essential to both ethical integrity and economic progress in the UK. It marks an opportunity to strengthen parliament's role in ensuring that the development of AI aligns with broader public interest goals, such as equity, economic progress and public service. The bill could also establish new standards and expectations for the development, use and procurement of AI; develop protections for communities disproportionately or unequally impacted by AI; and create robust quality assurance processes for data use and collection, as well as for the outcomes that emerge from AI tools and technologies. International frameworks such as the EU's AI Act can provide inspiration.

One critical lever currently missing from existing frameworks, including the EU's, is participatory engagement with the communities most likely to benefit from or be affected by AI tools and technologies. This matters if the large-scale benefits AI can bring are to be greeted with public confidence, legitimacy and trust rather than, as is currently often the case, resistance to new technologies. We do not have to look far for examples of innovation here in the UK. A wide range of participatory initiatives – citizens' juries, worker observatories, citizens' assemblies and co-design workshops – are already being supported by the Digital Good Network, Connected by Data and Responsible AI UK. Such initiatives promote citizen voice in shaping AI policy, use and development, and create new models for government and AI developers to engage with affected communities. The new government could formalise its commitment to such approaches in the AI bill, and could consider a new duty to involve and consult people likely to be affected by AI tools and technologies.

To conclude, a just AI transition for all demands a proactive approach to governance: one that balances realising the economic and social benefits of AI with guarding against its inherent risks. The government has a unique opportunity, with the AI bill, to set standards that not only foster innovation but also prioritise equity, public safety, and the interests and needs of the people impacted by AI. Learning from international frameworks such as the EU's AI Act, and championing participatory approaches, will be essential to creating a UK AI ecosystem that reflects public values and upholds trust. The bill offers the chance to put the right regulatory conditions in place. Can the government make the most of it?

Reema Patel

Reema Patel is a researcher with expertise in participatory methods, leading Elgon Social. She is based at the Digital Good Network and is an associate fellow of the Leverhulme Centre for the Future of Intelligence. She previously co-founded the Ada Lovelace Institute.

@ReemaSPatel

