The US, the UK, and 16 other countries have announced new guidelines for the development of artificial intelligence (AI), based on the principle that these systems need to be secure by design.
The international agreement, which is the first of its kind, represents a leap forward in AI safety by prioritizing the well-being of consumers over the rapid development of commercial AI products.
However, with OpenAI already developing an artificial general intelligence (AGI) system that researchers fear may be a “threat to humanity” and other companies playing catch-up with their own AI models, it's unclear whether the non-binding guidelines go far enough.
Global Push to Make AI Systems ‘Secure by Design’
The US, the UK, and 16 other countries, including Germany, Italy, Australia, and Singapore, have released new guidelines for the development of AI technology.
The 20-page agreement, which was published by the UK's National Cyber Security Centre (NCSC), emphasizes that AI systems should be “secure by design” and is broken down into four main areas: secure design, secure development, secure deployment, and secure operation and maintenance.
“The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority,” the Cybersecurity and Infrastructure Security Agency (CISA) said.
The directive encompasses each stage of the AI development lifecycle and aims to protect the public from misuse by ensuring “the technology is designed, developed, and deployed in a secure manner,” according to CISA.
Aside from advocating for safe AI development, the agreement also recommends that companies report vulnerabilities in the technology through bug bounty programs, so exploits can be identified and stamped out quickly.
Biden Follows Europe's Lead on AI Safety
This agreement marks the first time that countries from across the globe have banded together to create recommendations for secure AI development. However, the guidelines are also the latest in a string of actions the US government has taken to manage the risks associated with the technology on home soil.
At the end of October, the White House issued its first-ever executive order on AI development, addressing concerns such as algorithmic bias, data privacy, and job displacement.
The crackdown represented a hardening of the government's stance towards AI. But while White House Deputy Chief of Staff Bruce Reed claimed these were the “strongest set of actions” any government has taken to safeguard AI, the US's pace of change still lags behind that of its European neighbors.
First proposed in 2021, the EU's AI Act was passed in June of this year; it classifies AI systems according to the potential risk they pose to users. The framework aims to turn Europe into a global hub for trustworthy AI and, unlike the recent agreement, mandates changes through legislation rather than non-binding guidelines.
Do These Guidelines Go Far Enough?
With AI creating lucrative opportunities for Silicon Valley's top dogs – exemplified by OpenAI's pending AGI model ‘Project Q*’ – it's unclear whether these non-binding recommendations will be convincing enough to generate real change.
What's more, with research revealing that there are “virtually unlimited” ways to evade ChatGPT's and Google Bard's security features, the new guardrails also fail to address security concerns about AI models already in the wild.
However, any global effort to ramp up safeguards in the AI Wild West undoubtedly marks a step in the right direction.