On Monday, US President Joe Biden issued a broad and ambitious executive order on artificial intelligence (AI), catapulting the US to the forefront of conversations about regulating AI.
In doing so, the US leapfrogs other nations in the race to rein in AI.
Europe previously led the way with its AI Act, which was passed by the European Parliament in June 2023 but will not take full effect until 2025.
The presidential executive order is a collection of initiatives to regulate AI – some of which are good, and some of which seem rather half-baked.
It aims to address harms ranging from the immediate, such as AI-generated deepfakes, through the intermediate, such as job losses, to the longer term, such as the much-debated existential threat AI may pose to humanity.
Biden’s ambitious plan
The US Congress has been slow to pass significant regulation of big tech companies. This executive order is likely as much an attempt to circumvent an often gridlocked Congress as it is to spark action. For example, the order calls on Congress to pass bipartisan data privacy legislation.
Bipartisan support in the current climate? Good luck with that, Mr President.
The executive order will reportedly be implemented in the next three months to a year. It covers eight areas:
1. safety and security standards for AI
2. privacy protection
3. equality and civil rights
4. consumer rights
5. jobs
6. innovation and competition
7. international leadership
8. AI management.
On the one hand, the order covers many concerns of academics and the public. For example, one of its directives is to issue official guidance on how to watermark AI-generated content to reduce the risk of deepfakes.
It also requires companies developing AI models to prove they are safe before they can be rolled out for wider use. President Biden said:
“That means companies must tell the government about the large-scale AI systems they are developing and share rigorous independent testing results to prove they do not pose a national security risk to the American people.”
The potentially disastrous use of AI in warfare
At the same time, the order fails to address a number of pressing issues. For example, it doesn’t directly address how to deal with killer AI robots, a vexing topic under discussion in the United Nations General Assembly over the past two weeks.
This concern should not be ignored. The Pentagon is developing swarms of low-cost autonomous drones as part of its recently announced Replicator program. Similarly, Ukraine has developed homegrown AI-powered attack drones that can identify and attack Russian forces without human intervention.
Could we end up in a world where machines decide who lives or dies? The executive order only asks the military to use AI in an ethical manner, but does not define what that means.
And what about protecting elections from AI-powered weapons of mass persuasion? A number of media outlets have reported on how the recent elections in Slovakia may have been influenced by deepfakes. Many experts, including myself, are also concerned about the misuse of AI in the upcoming US presidential elections.
Without strict controls in place, we risk living in an age where nothing you see or hear online can be trusted. If this sounds exaggerated, consider that the US Republican Party has already released a campaign ad that appears to be generated entirely by AI.
Missed opportunities
Many of the initiatives in the executive order could and should be replicated elsewhere, including in Australia. We too should, as the order does, provide guidance to landlords, government programs and government contractors on how to ensure AI algorithms are not being used to discriminate against individuals.
We should also, as the order does, address algorithmic discrimination in the criminal justice system, where AI is increasingly used in high-stakes settings including sentencing, parole and probation, pretrial release and detention, risk assessments, supervision and predictive policing.
AI has also been used controversially in such settings in Australia, for example in the Suspect Targeting Management Plan used to monitor young people in New South Wales.
Perhaps the most controversial aspect of the executive order is the part addressing the potential harms of the most powerful so-called “frontier” AI models. Some experts believe these models – which are being developed by companies such as OpenAI, Google and Anthropic – pose an existential threat to humanity.
Others, including myself, believe such concerns are overblown and could distract from more immediate harms, such as misinformation and inequality, that are already harming society.
Biden’s order invokes extraordinary war powers (specifically the Defense Production Act of 1950, enacted during the Korean War) to require companies to notify the federal government when they train such frontier models. It also requires them to share the results of “red-team” safety testing, in which friendly hackers attack the software to probe for bugs and vulnerabilities.
I would argue it will be difficult, and perhaps impossible, to police the development of frontier models.
These requirements will not stop companies from developing frontier models abroad, where the US government has limited power. The open-source community can also develop them in a distributed fashion – one that makes the tech world ‘borderless’.
The executive order will therefore likely have its greatest impact on the government itself and how it uses AI, rather than on companies.
Nevertheless, it’s a welcome bit of action. British Prime Minister Rishi Sunak’s AI Safety Summit, taking place over the next two days, now seems like something of a diplomatic talkfest in comparison.
It makes one rather envious of the presidential power to get things done.
Toby Walsh, Professor of AI, Research Group Leader, UNSW Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.