The Future of Deregulation and Artificial Intelligence

3 July - 3 Sept 2023

Setting the scene

“We do not consider regulation to be a dirty word, but it must be used only where necessary and be implemented in a way that provides the right foundations for our economy to thrive. There is little doubt that governments too often reach for the lever of regulation first, when other ways to improve and safeguard outcomes are available. The result is that businesses face hundreds of new rules being imposed on them every year, and bear costs of familiarisation, legal advice and compliance. These costs are passed on to consumers in the form of higher prices. Further, each of us as consumers lose out when such regulation blocks innovation and competition, increases prices or lowers the quality and choice of goods and services available.” (Smarter regulation to grow the economy, 10 May 2023)

“AI has an incredible potential to transform our lives for the better. But we need to make sure it is developed and used in a way that is safe and secure.” (Rishi Sunak, Prime Minister, 7 June 2023)

“This is the age of artificial intelligence. Whether we know it or not, we all interact with AI every day - whether it’s in our social media feeds and smart speakers, or on our online banking. AI, and the data that fuels our algorithms, help protect us from fraud and diagnose serious illness. And this technology is evolving every day.” (Nadine Dorries MP, Digital, Culture, Media & Sport Secretary, 18 December 2022)

“Enthralled by machines that appear as our friends, fearful of blocking their superhuman speed, and incapable of explaining their new conclusions, humans may develop a reverence for computers that approaches mysticism. The roles of history, morality, justice, and human judgment in such a world are unclear.” (Henry Kissinger, Eric Schmidt & Daniel Huttenlocher, The Age of AI: And our human future, 16 November 2021)

Three years ago, Ipsos MORI conducted a poll of younger Leave voters to determine their attitudes to regulation, deregulation and enforcement. Just as Americans are reported to prefer, in general, more rather than less industry regulation, the UK poll also found that the majority of 18-to-44-year-old Leave voters—even among Conservative-voting respondents—expressed a preference for maintaining or increasing regulations across diverse areas of public life (see Charts 1-3).

The government also recognises that “Regulation in many specific circumstances is necessary to protect consumers and citizens, and uphold standards or indeed catalyse innovation.”1 Nevertheless, it is also “committed to lightening the regulatory burden on businesses and helping to spur economic growth, … unlock investment and boost growth in towns and cities across the UK.” It believes that “Now that we have left the EU, the UK can design regulation that unashamedly supports innovation, and promotes the interests of British people and businesses.” That is why, since the UK left the EU, it has revoked or reformed over 1,000 EU laws, and it proposes to revoke around 1,100 further pieces of Retained EU law (REUL) through the schedule to the REUL Bill, the Financial Services and Markets Bill and the Procurement Bill. It has also rolled out the first in a series of deregulation announcements expected this year, focused on delivering benefits to business.

For the purposes of this consultation, we want primarily to focus on a rapidly-developing area—namely, that of Artificial Intelligence (AI). AI can be broadly categorised into two types:

  1. Narrow AI (or weak AI) is designed to perform specific tasks and operate within a limited domain. Examples include voice assistants (like Siri or Alexa), recommendation algorithms (as used by social media platforms and TV streaming services), generative language models (such as ChatGPT) and image recognition systems (as used in self-driving cars).

  2. General AI (or strong AI) is designed to learn and apply knowledge across a wide range of tasks and domains. It aims to replicate or exceed human cognitive abilities, including problem-solving and creativity. While there have been significant advances in AI, the creation of a fully functioning General AI system is yet to be achieved.

The Rome Call for AI Ethics was first signed by Microsoft, IBM, the United Nations Food and Agriculture Organization (FAO), the Italian Government’s Ministry of Innovation and the Pontifical Academy for Life in February 2020 to promote an ethical approach to artificial intelligence. Renewed in January 2023, its signatories—in enterprises, governments and civil society—commit to developing AI that serves humanity as a whole. The Call consists of six succinct principles:

  1. Transparency: AI systems must be understandable to all.

  2. Inclusion: These systems must not discriminate against anyone because every human being has equal dignity.

  3. Responsibility: There must always be someone who takes responsibility for what a machine does.

  4. Impartiality: AI systems must not follow or create biases.

  5. Reliability: AI must be reliable.

  6. Security and privacy: These systems must be secure and respect the privacy of users.

In March, the Government published a white paper detailing its plans for implementing a pro-innovation approach to AI regulation. Its approach is based on five values-focused principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Its public consultation closed on 21 June but, for reference, we include its questions as Appendix 1.

The CPF AI Experiment

Earlier this year, the CPF conducted an experiment. A small number of CPF Group Coordinators were invited to evaluate a set of six submissions received in response to our Making The Case for Freedom consultation. The set of six included their own submission. Unknown to them, it also contained a response generated by ChatGPT, a generative language model. We were interested in several questions, including:

  • Would Group Coordinators generally agree with each other in their assessments of both their own and the other submissions?

  • Would the Group Coordinators be able to tell that one of the submissions had been generated by a non-human AI tool?

  • How highly would the Group Coordinators rate the AI submission compared with their own and with the others?

Overall, the Group Coordinators broadly agreed which were the most and the least noteworthy responses. As expected, they also tended to rank their own response as better than others did. None of them spotted the AI odd-one-out, although one noted that it was “perhaps a little too idealistic about Conservatism as the panacea for all ills.” That said, they all ranked it as one of the best—but not the best submission. We can glean some useful ideas from AI, but the best human groups are—at least for the foreseeable future—more insightful, more relevant and more original in their thinking than the best public AI tool.

At times, Groups ask whether we might publish any of the most noteworthy submissions as examples of “best practice” for others to learn from. We have not normally felt able to do so, as it would be inappropriate to distribute more widely what are confidential reports. We have no such reservations, however, about sharing content that has been generated by what is essentially a piece of freely-available computer software, the biases of which reflect both those of its programmers and those of the authors of the content on which the programmers trained their software—that is, 300 billion words “of data obtained from books, webtexts, Wikipedia, articles and other pieces of writing on the internet.”

If your group participated in one or more of the Making The Case for Conservatism consultations, we invite you to carry out the Self-Review Exercise provided in this briefing document—and let us know what you conclude. We would suggest doing this at a separate meeting to your discussion of the questions on The Future of Deregulation & Artificial Intelligence. We hope you find the exercise both interesting and informative!

Questions for discussion

Groups should not feel obliged to discuss all of the following questions, and may wish to focus their discussions on the ones that most interest them.

  1. Which regulations hold back businesses, and which should the government consider for reform?

  2. How can we ensure that AI systems are developed and deployed in a manner that aligns with ethical principles and respects societal values?

  3. How can we safeguard individual privacy rights and protect sensitive data in the age of AI?

  4. What strategies can be implemented to reskill or upskill workers whose jobs may be at risk of automation and AI-driven job displacement?

  5. How can we bridge the existing digital divide and ensure fairness and equity in the development and deployment of AI technologies, particularly in relation to access, benefits, and opportunities?

  6. How can we establish mechanisms to hold AI systems and their developers accountable, ensuring transparency in their decision-making processes and data usage?

  7. How can we address the national security implications of AI, including potential vulnerabilities and threats?

  8. How can we foster international cooperation and establish global norms, standards, and frameworks for AI development, deployment, and regulation?

  9. Is there any other observation you would like to make?

