The U.S. has taken a significant first step towards regulating AI. President Joe Biden has signed an executive order on safe, secure and trustworthy AI, imposing a series of obligations on the AI technology sector based on the principles of transparent and responsible machine development.
This lays the groundwork for a future where people can safely benefit from the enormous potential of artificial intelligence (AI), while limiting the associated risks.
According to the White House announcement, the order is part of a long-term strategy for responsible innovation and builds on the President’s previous actions (including work that led to voluntary commitments from leading technology companies to advance the safe, secure and trustworthy development of AI), as well as numerous initiatives and regulations planned for the near future.
Executive order on safe, secure and trustworthy AI
What obligations will be imposed on AI-based solution providers? Focused primarily on government activities, the order regulates a relatively narrow range of applications, introducing, among other things, stringent new safety and security standards for AI and mandating specific actions:
- Developers of the most powerful AI systems will be required to share their safety test results and other critical information with the U.S. government
- Standards, tools, and tests will be developed to help ensure that AI systems are safe, secure, and trustworthy. It is worth noting that standards for critical infrastructure, cybersecurity, or radiological and biological security are the most significant steps taken to date to enhance AI security
- Americans must be protected from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The actions of public authorities are intended to make it easier for Americans to ensure that the communications they receive from the government are authentic – while setting an example for the private sector and governments around the world
- New standards for biological synthesis screening will be developed to protect against the risks of using AI to engineer dangerous biological materials. Meeting these will be a condition of federal funding for life-science projects
- AI’s potentially game-changing cyber capabilities will be harnessed to make software and networks more secure. This will include an advanced cybersecurity programme to develop AI tools to find and fix vulnerabilities in critical software
The order also requires the development of a National Security Memorandum to direct further actions in this area. This includes ensuring the safe, ethical and effective use of AI by the U.S. military and intelligence community, and countering adversaries’ military use of AI.
Sector Risk Management Agencies
Designated for the 16 critical infrastructure sectors, Sector Risk Management Agencies (SRMAs) are to play an important role in both analysing and minimising the risks associated with the deployment and use of AI-based solutions.
SRMAs will assist their respective secretaries in developing tools to assess AI capabilities. Officials will be tasked with identifying capabilities that could pose chemical, biological or nuclear threats to critical infrastructure and energy security, among other areas.
EU approach to AI risks
The EU emphasises the importance of risk analysis and risk management for the enforceability of AI regulations, as evidenced, for example, by the draft AI Act. The Act defines four different levels of risk in AI: unacceptable, high, limited and minimal.
Systems whose use is classified as unacceptable will be banned outright. In this way, the EU lawmakers want to protect particularly sensitive data, such as biometric data. The use of AI to scan people in public spaces (i.e. real-time biometric identification) or to classify them based on their individual characteristics will be unacceptable (apart from some exceptions for law enforcement authorities).
The social responsibility of AI
Significantly, the U.S. administration’s document emphasises the social responsibility of AI, reflecting the U.S. approach to anti-discrimination. As the White House release states:
“Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing”.
This is why it is important that AI be a technological solution that reduces societal inequity, rather than one that contributes to discrimination and exacerbates inequity. The order thus provides guidelines for AI systems in areas such as housing, social welfare, justice, education and the labour market.
In the labour market, the White House not only highlights the productivity gains promised by AI-based tools, but also foresees new dangers, such as increased surveillance, job displacement, and the collection and processing of workers’ data. To address these dangers in advance, the executive order directs federal agencies to prevent underpayment of workers, unfair evaluation of job applications, and interference with workers’ ability to organise. The presidential order is also to be followed by a report on AI’s potential labour-market impacts, and a study on how to strengthen federal support for workers facing disruption caused by AI.
The new regulations also give rise to a number of state and federal initiatives, including:
- Developing guidance for agencies’ use of AI, including providing clear standards for protecting rights and safety
- Helping agencies acquire specified AI products and services more quickly, cheaply and effectively by expediting contracting or the hiring of AI professionals
- Providing AI training for employees at all levels
America wants to lead the way in technology solutions and attract the best talent
From the perspective of the European legislative environment, it is welcome to see so much attention being paid to the early regulation of the use of generative artificial intelligence in the public sector and, ultimately, in the private sector.
When it comes to the development of artificial intelligence, the U.S. is positioning itself as a leader in both technological solutions and regulation. This contrasts with America’s more lenient stance towards regulation in areas such as data protection and privacy.
The Executive Order aims to ensure US leadership by adopting solutions to catalyse AI research through:
- A pilot of the National AI Research Resource, which will provide key AI resources and data to researchers and students working in the field
- Expanded grants for AI research in vital areas such as healthcare and climate change
The Executive Order aims to promote fair competition in the expanding AI ecosystem by ensuring that small entrepreneurs and developers have access to adequate technical assistance and resources – giving them a level playing field in commercialising breakthrough AI solutions. Finally, the Executive Order provides for increased visa opportunities for highly skilled immigrants: the visa process will be modernised to attract new AI workers and students to the U.S.
Protecting citizens’ privacy
The Executive Order draws attention to the need to protect personal data. The use of artificial intelligence capabilities should be combined with advanced data protection technologies, such as cryptographic tools. The document also highlights the need to strengthen and fund research to accelerate the development of privacy-preserving techniques.
President Biden called on Congress to accelerate work on privacy legislation to protect all Americans, especially children. In doing so, it is essential to develop appropriate guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, data collection methods, and the purposes for which personal information is used. At the same time, the presidential administration emphasises the importance of international cooperation in this area.
It is also critical to use artificial intelligence to address global challenges, such as advancing sustainable development and mitigating threats to critical infrastructure.
Biden’s AI revolution
News of the executive order has been widely reported in the media.
There has been no shortage of voices on the so-called Biden revolution, highlighting the groundbreaking nature of the new solutions. Experts in the field of artificial intelligence point to the rightness and necessity of the regulations, while at the same time occasionally expressing concern over whether they will sufficiently protect citizens from potentially harmful actions.
On the other hand, there are also many voices pointing to the rigidity of the adopted approach. Unsurprisingly, some in the industry see the new regulations as an attempt to stifle technological innovation. Last week, a global summit hosted by UK Prime Minister Rishi Sunak took place at Bletchley Park, where key figures from government and the technology sector debated the need to control the development of artificial intelligence. Although the participants represented different interest groups, they agreed on the essential point that leaving AI issues outside public control would pose a serious civilisational risk to humanity, and possibly even an existential one.
And for the above reasons, we are closely monitoring all regulatory developments in the AI market and will keep you informed.
Questions? Contact us