The regulations are finally starting to gain a little steam for platforms like ChatGPT, with the EU Commissioner calling for content generated by AI to be labeled clearly in hopes of curtailing the spread of disinformation.
It's no overstatement to say that generative AI platforms are slowly but surely changing the way business gets done. Companies around the world are using AI tools for everything from writing emails to coding, prompting a broader reconsideration of what it means to work.
However, because of the breakneck speed of the roll-out, meaningful regulations have been slow to materialize. The EU Commissioner is making a big push to change that soon.
EU Commissioner: AI Content Should Be “Clearly Labeled”
During a press conference on Monday, the deputy head of the European Commission suggested that companies developing generative AI platforms should “clearly label” content produced by these services.
“Signatories who integrate generative AI into their services like Bingchat for Microsoft, Bard for Google should build in necessary safeguards that these services cannot be used by malicious actors to generate disinformation. Signatories who have services with a potential to disseminate AI generated disinformation should in turn put in place technology to recognize such content and clearly label this to users.” – Vera Jourova, deputy head of European Commission
Considering that companies involved with this kind of technology, like Microsoft and Google, have already signed up to the EU Code of Practice, they're expected to outline plans for this kind of safety measure sometime next month.
Can Generative AI Be Used to Spread Disinformation?
The primary reasoning behind labeling content from generative AI platforms is that, given the rise and effectiveness of disinformation campaigns over the past few years, these services hand bad actors an unprecedented ability to spread falsehoods at scale.
So can generative AI actually be used to spread disinformation? One study found that Bard, Google's generative AI platform, is absolutely capable of this kind of behavior. In fact, when asked two months ago to write content about 100 different topics commonly considered to be misinformation, the platform happily complied on 76 of them.
Now, whether labeling the content as “generated by AI” would help is another story, especially since bad actors will likely find ways around these kinds of regulations in the long run. Still, we have to do something.
The Content of Theseus
The ship of Theseus is a thought experiment that asks: if, over time, you replace every piece of wood in a ship, is it still the original ship?
When it comes to AI-generated content, and whether or not it should be labeled, this thought experiment takes on even more significance: the companies behind generative AI platforms have acknowledged that their output is often convincingly wrong and requires human editing to ensure accuracy.
In terms of this potential regulation, when does content edited by a human cease to be content generated by AI? And where do you draw the line between the two, particularly when disinformation is at stake?
All that to say, we've clearly only scratched the surface of how generative AI platforms like ChatGPT and Google Bard will impact the world.