According to a recent survey by the Media, Entertainment, and Arts Alliance (MEAA), more than half of media and creative professionals in Australia are deeply worried about the growth of AI. The survey of almost 400 members found that 56% of participants are very concerned about AI and 30% are somewhat concerned; a mere 2% said they had no concerns at all.
The survey also points to strong support for a more active government role: almost all participants wanted stricter rules on AI applications, fuelling demands for swift legislative intervention to regulate what many see as a threat to their jobs.
AI Sparks Intellectual Property Theft Fears
According to the survey, the most widely cited risk of AI is the spread of misinformation, which concerns 91% of respondents: 74% are extremely concerned and 17% are moderately concerned. In addition, 72% of respondents are very concerned about the theft of intellectual or creative property, while 18% are somewhat concerned.
Voice actor Cooper Mortlock explains how these issues play out in the real world. Mortlock says his voice was used in an animated series without his consent, an example of how vulnerable creative workers are to AI misuse. He had a contract for 52 episodes and had recorded 30 before the project was cancelled.
“But when we reached episode 30, they cancelled it, and then about a year later, after the contract expired, the producer released another episode using an AI clone of my voice and the voices of the other actors.”
Legal System Struggles with AI Disputes
Mortlock sought legal recourse but was met with major hurdles. Despite his cease and desist letter, the producer insisted they had never used AI, claiming instead to have relied on vocal impersonators and various other technologies.
Through their lawyer, the producer repeated the denial while adding, “even if they had used AI, that would have been allowed under the terms of your contract.”
These legal loopholes vividly illustrate the need for clearer laws to shield creative workers from exploitation. Legal advice obtained by the MEAA echoed the producer’s position, finding that current contract law may not address the particulars of AI use.
The survey results show that the majority of media and creative workers support government intervention: 97% of respondents said legislative measures are required to safeguard their work from unauthorized AI use.
In a related development, OpenAI recently pulled ChatGPT’s chatbot voice, Sky, after concerns it sounded similar to actress Scarlett Johansson. Advances in technology now allow characteristics like voices to be replicated with ever greater ease and accuracy.