
What if they created the Artificial Intelligence (Ethics) Act 2023 - Dr Mark Burgin

10/07/23. Dr Mark Burgin imagines how the government might respond to the threats posed by recent developments in AI, and what effect such a response would have on the legal profession.

The impact of AI advances on the legal profession is still being debated. Some experts believe that AI will lead to the decline of the legal profession, as AI systems become capable of handling more and more legal tasks. Others believe that AI will actually create new opportunities for lawyers, as they will be needed to oversee and manage AI systems.

One of the most important tasks for governments is to ensure that AI is used in a safe and ethical manner. This means developing regulations that protect people from the potential risks of AI, such as job displacement, discrimination, and bias. It also means developing guidelines for the development and use of AI that are based on human values.

As AI becomes more capable of making decisions that have legal implications, it is likely to raise new questions about liability and responsibility. As a result, lawyers will need to be prepared to advise clients on these new legal challenges. One response to these challenges would be a statute to manage the risks.

Artificial Intelligence (Ethics) Act 2023

An Act to establish ethical guidelines for the development and use of artificial intelligence, to require AI systems to be transparent and accountable, to prohibit the use of AI for harmful purposes, to establish safety standards for AI systems, to provide compensation for people who are harmed by AI systems, to regulate the use of AI in sensitive areas, such as healthcare and law enforcement, and to provide for taxation of AI based upon job displacement and new forms of inequality.

Be it enacted by the King’s most Excellent Majesty, by and with the advice and consent of the Lords Spiritual and Temporal, and Commons, in this present Parliament assembled, and by the authority of the same, as follows:

1. Interpretation

In this Act—

"AI system" means any system that uses artificial intelligence to make decisions or to perform tasks;

"ethical guidelines" means guidelines that are developed in accordance with section 2;

"harmful purpose" means any purpose that is likely to cause harm to people or the environment;

"safety standards" means standards that are developed in accordance with section 3;

"sensitive area" means an area of activity that is considered to be particularly important to society, such as healthcare and law enforcement.

2. Ethical guidelines

(1) The Secretary of State must, by regulations, make ethical guidelines for the development and use of AI.

(2) The ethical guidelines must include, but are not limited to, the following—

(a) a requirement that AI systems be designed to be transparent and accountable;

(b) a prohibition on the use of AI for harmful purposes, such as to manipulate people or to cause harm to individuals or society;

(c) a requirement that AI systems be designed to be safe and to meet appropriate safety standards;

(d) a requirement that AI systems be designed to be fair and to avoid discrimination;

(e) a requirement that AI systems be designed to be privacy-preserving;

(f) a requirement that AI systems be designed to be socially beneficial.

3. Safety standards

(1) The Secretary of State must, by regulations, make safety standards for AI systems.

(2) The safety standards must include requirements for the following matters—

(a) the design and development of AI systems;

(b) the testing of AI systems;

(c) the operation of AI systems;

(d) the maintenance of AI systems.

4. Transparency and accountability

(1) AI systems must be designed and developed in a way that ensures that they are transparent and accountable.

(2) This means that—

(a) every person who develops or uses an AI system must take reasonable steps to ensure that the system is transparent and accountable;

(b) for the purposes of this section, an AI system is transparent if it is possible for a person to understand how the system works and the decisions that it makes;

(c) for the purposes of this section, an AI system is accountable if it is possible to hold the person who developed or uses the system responsible for the decisions that the system makes;

(d) people must be able to challenge the decisions that AI systems make;

(e) the operation of AI systems must be monitored and recorded.

5. Prohibition on harmful purposes

(1) It is an offence for a person to develop or use an AI system for a harmful purpose.

(2) For the purposes of this section, a harmful purpose is a purpose that is likely to cause harm to individuals or society.

6. Compensation for harm

(1) If an AI system causes harm to a person, the person is entitled to compensation from the person who developed or used the AI system.

(2) The amount of compensation is to be determined by the court.

(3) Failure to allow reasonable challenge to the decisions of an AI system is to be treated as harm, and the person responsible is liable to a fine for each breach.

7. Regulation of AI in sensitive areas

(1) The Secretary of State may by regulations make provision for the regulation of the use of AI in sensitive areas.

(2) Sensitive areas include—

(a) healthcare;

(b) law enforcement;

(c) financial services;

(d) education;

(e) the environment.

(3) The regulation must be designed to ensure that AI is used in a safe and ethical manner in these sensitive areas.

8. Taxation of AI

(1) The Secretary of State may by regulations make provision for the taxation of AI.

(2) The tax is to be designed to raise revenue to fund the regulation of AI and to mitigate the negative impacts of AI, such as job displacement and new forms of inequality. The taxation of AI may be based on—

(a) the use of AI;

(b) the harm caused by AI;

(c) the benefits derived from AI.

9. The AI research group

The AI research group will be responsible for assessing the transparency and accountability of AI.

It will commission research into the ethical guidelines for the development and use of AI, the safety standards for AI systems, the regulation of AI in sensitive areas, the impact of AI on disabled people, and the wider economic impacts.

It will be responsible for public education, for arranging a public forum, and for investigating violations of this Act.

The group will report to the Secretary of State by 2027 on options for regulations, with recommendations for the structure of a successor body, and will then dissolve.

10. Penalties

(1) Failure to cooperate with the AI research group, or breach of the provisions of this Act, may attract a fine of up to 0.01% of global earnings; this may, however, be waived at the Secretary of State’s discretion.

(2) It is the intention of this statute to provide the powers for a temporary solution to the risks of AI and to create the necessary mechanisms for a permanent statute to replace it as soon as is practicable.

Commencement and sunset.

This Act comes into force on a day to be appointed by the Secretary of State and will sunset five years after commencement.

Implications of the Artificial Intelligence (Ethics) Act 2023

The requirement to be able to challenge the decisions should limit the risk of AI judges acting autonomously. This is important because it ensures that there is a human element in the decision-making process. If AI judges were able to act autonomously, there would be a risk that they would make decisions that are not in the best interests of justice.

The requirement to be socially beneficial ensures that AI cannot be developed simply to save money, which will reduce the risk of unethical practice. This is important because it ensures that AI is used in a way that benefits society as a whole. If AI were developed simply to save money, there would be a risk that it would be used in a way that is harmful to society.

The requirement for privacy might appear to be covered by the Data Protection Act 2018; however, AI presents a further problem. As AI such as large language models continues to learn, there is a risk that data in the training set will leak into the outputs. Imagine if details from one case leaked into another. This is a serious concern because it could lead to the disclosure of confidential information. It is important for lawyers to be aware of this risk and to take steps to mitigate it.
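The leakage risk described above can be illustrated with a toy sketch. The model below is a minimal bigram text generator, a deliberately simplified stand-in for a real large language model, and the "confidential record" is entirely hypothetical. The point it demonstrates is that when a model has memorised its training data, generation can reproduce that data verbatim.

```python
import random

def train_bigrams(text):
    """Build a bigram table: each word maps to the list of words seen after it."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length=10):
    """Generate text by repeatedly sampling a successor of the last word."""
    out = [start]
    for _ in range(length - 1):
        successors = table.get(out[-1])
        if not successors:
            break  # no known continuation
        out.append(random.choice(successors))
    return " ".join(out)

# Hypothetical confidential record included in the training data.
training_data = "the claimant John Smith settled for 50000 pounds in June"
table = train_bigrams(training_data)

# With only one continuation recorded per word, generation reproduces the
# confidential sentence verbatim: the training data has leaked into the output.
print(generate(table, "the", length=11))
```

A real model trained on millions of documents leaks less predictably than this toy, but the mechanism is the same: text seen during training can resurface in generated output.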

The extra rules requiring AI to be used in a safe and ethical manner in sensitive areas will provide additional protection for lawyers and their clients. This is important because it ensures that AI is used safely and ethically in sensitive areas, such as healthcare and law enforcement. If AI were used in a harmful way in these areas, it could have a serious impact on people's lives.

Making ‘failure to allow reasonable challenge’ an offence means that lawyers will have a new tool for making challenges. This is important because it gives lawyers a way to hold AI users accountable if they fail to allow reasonable challenge to AI decisions.

The rules on transparency and accountability will aid lawyers who want to challenge apparent discrimination or the use of an AI system for a harmful purpose. This is important because it ensures that lawyers have the information they need to challenge AI systems that are being used in a discriminatory or harmful way.

This article was written with assistance from the Bard large language model from Google.

Doctor Mark Burgin, BM BCh (Oxon) MRCGP, is on the General Practitioner Specialist Register.

Dr. Burgin can be contacted on 0845 331 3304 and via the website drmarkburgin.co.uk.

This is part of a series of articles by Dr. Mark Burgin. The opinions expressed in this article are the author's own, not those of Law Brief Publishing Ltd, and are not necessarily commensurate with general legal or medico-legal expert consensus of opinion and/or literature. Any medical content is not exhaustive but at a level for the non-medical reader to understand.

Image ©iStockphoto.com/Parradee Kietsirikul

All information on this site was believed to be correct by the relevant authors at the time of writing. All content is for information purposes only and is not intended as legal advice. No liability is accepted by either the publisher or the author(s) for any errors or omissions (whether negligent or not) that it may contain. 

Professional advice should always be obtained before applying any information to particular circumstances.

Excerpts from judgments and statutes are Crown copyright. Any Crown Copyright material is reproduced with the permission of the Controller of OPSI and the Queen’s Printer for Scotland under the Open Government Licence.