We need to fight fake AI content using blockchain because otherwise “sh*t will get really weird,” says Near founder Illia Polosukhin
We’re rolling out one genuine use case for AI and crypto each day this week, including reasons why you shouldn’t necessarily believe the hype. Today: how blockchain can fight the fakes.
Generative AI is extremely good at generating fake photos, fake letters, fake bills, fake conversations, fake everything. Near co-founder Illia Polosukhin warns that soon, we won’t know which content to trust.
“If we don’t solve this reputation and authentication of content (problem), shit will get really weird,” Polosukhin explains. “You’ll get phone calls, and you’ll think this is from somebody you know, but it’s not.”
“All the images you see, all the content, the books will be (suspect). Imagine a history book that kids are studying, and literally every kid has seen a different textbook, and it’s trying to affect them in a specific way.”
Blockchain can be used to transparently trace the provenance of online content so that users can distinguish between genuine content and AI-generated images. But it won’t sort out truth from lies.
“That’s the wrong take on the problem because people write not-true stuff all the time. It’s more a question of: when you see something, is it by the person that it says it is?” Polosukhin says.
And that’s where reputation systems come in: “OK, this content comes from that author; can we trust what that author says?”
“So, cryptography becomes an instrument to ensure consistency and traceability, and then you need reputation around this cryptography: on-chain accounts and record keeping to actually ensure that X posted this and X is working for Cointelegraph right now.”
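The pairing Polosukhin describes — a signature binding content to an author, plus an on-chain reputation record for that author — can be sketched in a few lines. This is a toy illustration, not any real chain’s API: it uses Python’s stdlib `hmac` as a stand-in for the asymmetric signatures (e.g. Ed25519) a real system would use, and the `REPUTATION` dict is a hypothetical placeholder for on-chain account records.

```python
import hashlib
import hmac

# Stand-in key: a real system uses asymmetric keys (e.g. Ed25519), so anyone
# can verify a signature without holding the author's private key.
AUTHOR_KEY = b"author-secret-key"  # hypothetical key material

# Hypothetical on-chain reputation registry: account -> standing
REPUTATION = {"X": {"employer": "Cointelegraph", "verified": True}}

def sign(author: str, content: bytes) -> str:
    """Bind a piece of content to an author with a keyed digest."""
    return hmac.new(AUTHOR_KEY, author.encode() + content, hashlib.sha256).hexdigest()

def verify(author: str, content: bytes, signature: str) -> bool:
    """Check the signature matches AND the account has on-chain standing."""
    expected = hmac.new(AUTHOR_KEY, author.encode() + content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature) and \
        REPUTATION.get(author, {}).get("verified", False)

post = b"Blockchain can trace content provenance."
sig = sign("X", post)
print(verify("X", post, sig))         # True: right author, good standing
print(verify("X", b"tampered", sig))  # False: content was altered
```

The two checks mirror Polosukhin’s framing: the digest answers “is this really by who it says?” and the registry lookup answers “can we trust that author?”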
If it’s such a great idea, why isn’t anyone doing it already?
There are a variety of existing supply chain projects that use blockchain to prove the provenance of goods in the real world, including VeChain and OriginTrail.
However, content-based provenance has yet to take off. The Trive News project aimed to crowdsource article verification via blockchain, while the Po.et project stamped a transparent history of content on the blockchain, but both are now defunct.
More recently, Fact Protocol was launched, using a combination of AI and Web3 technology in an attempt to crowdsource the validation of news. The project joined the Content Authenticity Initiative in March last year.
When somebody shares an article or piece of content online, it is first automatically validated using AI; fact-checkers from the protocol then double-check it and record the information, along with timestamps and transaction hashes, on-chain.
“We don’t republish the content on our platform, but we create a permanent, on-chain record of it, as well as a record of the fact-checks conducted and the validators for the same,” founder Mohith Agadi told The Decrypting Story.
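The general pattern Agadi describes — store a fingerprint of the content plus audit metadata, never the content itself — is straightforward to sketch. Fact Protocol’s actual schema isn’t documented here, so the field names below are illustrative assumptions, and a plain list stands in for the chain.

```python
import hashlib
import time

def make_record(content: bytes, verdict: str, validators: list[str]) -> dict:
    """Build a provenance record: the content's hash plus fact-check metadata.
    The content itself is not republished; only its fingerprint is stored."""
    return {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "verdict": verdict,             # e.g. "verified" / "disputed" (assumed labels)
        "validators": validators,       # who performed the fact-check
        "timestamp": int(time.time()),  # when the check was recorded
    }

ledger = []  # stand-in for on-chain storage; a real record would carry a tx hash

article = b"Example article text shared online."
record = make_record(article, "verified", ["checker-1", "checker-2"])
ledger.append(record)
```

Because only the SHA-256 digest goes on record, anyone holding the original article can later recompute the hash and confirm it matches the fact-checked version.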
And in August, global news agency Reuters ran a proof-of-concept pilot program that used a prototype Canon camera to store the metadata for photos on-chain using the C2PA standard.
It also integrated Starling Lab’s authentication framework into its picture desk workflow. With the metadata, edit history and blockchain registration embedded in the photograph, users can verify a picture’s authenticity by comparing its unique identifier to the one recorded on the public ledger.
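The verification step above reduces to recomputing a photo’s fingerprint and checking it against the ledger. A minimal sketch, assuming SHA-256 as the identifier and a dict standing in for the public ledger (the real C2PA flow uses a richer signed manifest that also tracks legitimate edits):

```python
import hashlib

# Stand-in for the public ledger: identifier -> registration details
LEDGER: dict[str, dict] = {}

def register(photo: bytes, source: str) -> str:
    """Record a photo's fingerprint at capture time (the camera/pipeline's job)."""
    ident = hashlib.sha256(photo).hexdigest()
    LEDGER[ident] = {"source": source}
    return ident

def is_authentic(photo: bytes) -> bool:
    """Recompute the fingerprint and look for a matching ledger entry."""
    return hashlib.sha256(photo).hexdigest() in LEDGER

original = b"\x89PNG...raw photo bytes"  # placeholder bytes, not a real image
register(original, "picture desk")
print(is_authentic(original))            # True: matches the registered identifier
print(is_authentic(original + b"edit"))  # False: any alteration changes the hash
```

The all-or-nothing behavior of a bare hash is why production systems embed edit history as well — otherwise a routine crop or color correction would make a genuine photo unverifiable.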
Academic research in the area is ongoing, too.
Is blockchain needed?
Technically, no. One of the issues hamstringing this use case is that you actually don’t need blockchain or crypto to prove where a piece of content came from. However, using them makes the process much more robust.
So, while you could use cryptographic signatures to verify content, Polosukhin asks how the reader can be certain it is the right signature. If the key is posted on the originating website, someone can still hack that website.
Web2 deals with these issues by using trusted service providers, he explains, but that “breaks all the time.”
“Symantec was hacked, and they were issuing SSL certificates that were not valid. Websites are getting hacked; Curve, even Web3 websites are getting hacked because they run on a Web2 stack,” he says.
“So, from my perspective, at least, if we’re looking forward to a future where this is used in malicious ways, we need tools that are actually resilient to that.”
Dont believe the hype
People had been discussing this use case for blockchain to fight disinformation and deepfakes long before AI took off, yet there was little progress until recently.
Microsoft has just rolled out its new watermark to crack down on generative AI fakes being used in election campaigns. The watermark, from the Coalition for Content Provenance and Authenticity, is permanently attached to the metadata and shows who created the content and whether AI was involved.
The New York Times, Adobe, the BBC, Truepic, The Washington Post and Arm are all members of C2PA. However, the solution doesn’t require the use of blockchain, as the metadata can be secured with hashcodes and certified digital signatures.
That said, it can also be recorded on a blockchain, as Reuters’ pilot program in August demonstrated. The awareness arm of C2PA is called the Content Authenticity Initiative, and Web3 outfits including Rarible, Fact Protocol, Livepeer and Dfinity are CAI members flying the flag for blockchain.
Also read:
Real AI use cases in crypto, No. 1: The best money for AI is crypto
Real AI use cases in crypto, No. 2: AIs can run DAOs
Real AI use cases in crypto, No. 3: Smart contract audits & cybersecurity