Google finds itself at the eye of the AI ethics storm once again this week, as the contentious issues of AI “deepfake” technology and data scraping have the tech giant battening down the hatches both at home and abroad.
In the US, Google is seeking to strike a deal with record industry superpower Universal Music that would compensate artists when generative AI platforms imitate their voices and melodies to create “deepfake” songs, something that usually happens without the original artist's consent.
Down Under in Australia, Google is in a seemingly more combative mood, arguing in a submission to regulators that the burden should be on publishers to opt out of data scraping if they don't want their content to be used.
I'm A Barbie Girl In An AI World
Google is currently facing two significant AI dilemmas, in seemingly contrasting ways, and on opposite sides of the globe. The issues at hand are the use of AI to repurpose music made by human artists and the use of AI to repurpose human-generated web content.
The more pressing of the two is arguably Google's negotiations with fellow global conglomerate Universal over a music licensing deal that effectively covers sampling by generative AI platforms.
This was first reported by the Financial Times, which cites four people familiar with the matter and notes that the discussions are still at an early stage.
The ultimate goal of these talks, according to the sources, is to develop a tool that lets fans use AI to create so-called “deepfake” songs for Google-owned YouTube content, while also paying the copyright owner for what is being used.
At present, AI is a bit of a Wild West when it comes to what you can find online. Spend enough time there and you may well have heard a long-deceased Johnny Cash “covering” Barbie Girl, while no doomscrolling session on Instagram is complete until you've seen Harry Potter, Breaking Bad, or The Sopranos reimagined as a 90s sitcom.
Precedent Exists In The Form of YouTube
Unsurprisingly, artists themselves are less than impressed with their voices being used in uncontrollable ways. Big names such as Drake (one of many global megastars represented by Universal, alongside Taylor Swift and BTS) have spoken out against their work being effectively cloned without permission.
In addition to the viral rendition of Barbie Girl, there are now entire social media accounts dedicated to AI songs, with rappers Tupac and Notorious B.I.G. among the other deceased stars to feature prominently on the @PluggingAI channel.
What might a final deal between Universal and Google look like? It's too early to say, but a precedent of sorts exists in the current agreement between YouTube and the music industry over the use of copyrighted songs in user-created videos. This system, which was hard-fought to begin with, already pays out an estimated $2bn a year to the music industry and its artists for the use of their songs.
Google Opts Out of Responsibility For Web Scraping
There's little doubt a deal with Universal to help Google create a legitimate AI music tool would be a major boost for the search giant as it looks to navigate largely uncharted technological waters. Rivals such as Microsoft and its search engine, Bing, are likely to benefit from tight integration with ChatGPT, though Google has its own chatbot product in Bard – and the Bard vs ChatGPT debate is another matter entirely.
Less certain is how long it will be permissible for Google and other big AI players to scrape the web as part of training their AI systems. According to Google, the issue also warrants addressing, though it argues that responsibility rests with publishers to opt out of having their data and content trawled. That's per a Guardian report that cites Google's submission to a review of the AI regulatory framework in Australia.
Apparently, the Mountain View-based firm argues that copyright law should be amended to explicitly allow for web scraping by default, saying that it amounts to “fair use” of publicly published content.
Under Google's proposal, those who don't want their work included in AI training models should be able to opt out or otherwise specify how their works can be used, not unlike how Creative Commons licenses currently operate. While what happens in Australia may end up having no bearing on the wider AI and copyright debate, it's still one worth monitoring as AI lawsuits like the one fronted by Sarah Silverman descend on courtrooms in the US and beyond.