Biden’s Safeguards against Artificial Intelligence

Image of AI/Flickr.com

UNITED STATES — On Monday, October 30, President Joe Biden signed an executive order restricting the use of artificial intelligence (AI).

Artificial intelligence has developed rapidly in recent years. In Biden’s words, “We’re going to see more technological change in the next ten, maybe the next five years, than we’ve seen in the last 50 years.” From chatbots to image recognition, AI has been on the rise in a multitude of ways. However, its capabilities have raised many ethical concerns and questions about whether restrictions should be implemented.

The goal of the regulations is to protect the privacy of Americans, advance equity and civil rights, and improve safety and security, among other aims. This will supposedly be accomplished in several ways:

For one, developers of powerful AI systems are required to share vital information, such as safety test results, with the U.S. government. In practice, any company developing a foundation model capable of threatening the nation’s security, public health, or economic security would have to notify the federal government when training the model and disclose the results of its safety tests. This requirement would be enforced under the Defense Production Act.

Furthermore, the National Institute of Standards and Technology was made responsible for setting standards for red-team testing to ensure that the technology is safe before being released for public use. Red-team testing is a way of assessing weaknesses by simulating an attack: it measures how far an attacker could penetrate a system and how much of an organization’s security could be bypassed before detection. The test also evaluates how effectively the organization’s defenses handle the attack. The Departments of Homeland Security and Energy must also address any biological, chemical, radiological, cybersecurity, or nuclear threat to critical infrastructure.

Additionally, protection against fraud was addressed. This would be done by watermarking AI-generated content and establishing new practices for detecting it.

Beyond those listed here, there are many more regulations meant to reduce the potential harms of AI. According to Biden, “To realize the promise of AI and avoid the risk, we need to govern this technology.”

Despite this action, criticism persists for a multitude of reasons. Some believe the order is too strict, while others believe it has not done enough in specific areas.

Cody Venzke, a senior policy counsel in the National Political Advocacy Department of the American Civil Liberties Union (ACLU), states, “. . . The order raises significant red flags as it fails to provide protection from AI in law enforcement, like the role of facial recognition technology in fueling false arrests, and in national security, including in surveillance and immigration.”

There are also criticisms of the vague language surrounding enforcement, as well as of the order’s long-term durability. MIT Technology Review writes that the order “lacks specifics on how the rules will be enforced.” As for the long term, executive orders can easily be rescinded by a subsequent president.

Either way, many still see this as a necessary step toward developing better AI regulations in the future.
