This video gives a decent overview of how the C2PA standard is supposed to function in practice:
It occurs to me that this is exactly what Charles Stross was talking about here.
“The smart money says that by 2027 you won’t be able to believe anything you see in video unless there are cryptographic signatures on it, linking it back to the device that shot the raw feed—and you know how good most people are at using encryption? The dumb money is on total chaos.”
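To make that concrete, here is a minimal sketch of the core idea behind C2PA-style provenance: the capture device signs a manifest that binds a hash of the raw footage to the device's key, and anyone downstream can check it. This is emphatically not the real C2PA format (the actual spec embeds COSE-signed manifests in JUMBF boxes); the field names and flow below are illustrative only, and it assumes the `cryptography` package is installed.

```python
# Toy sketch of device-signed provenance, NOT the actual C2PA manifest
# format. Field names ("content_sha256", "claim") are made up for
# illustration.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# Stand-in for a signing key provisioned on the camera at the factory.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()


def sign_capture(video_bytes: bytes) -> dict:
    """Build and sign a toy provenance manifest for a capture."""
    manifest = {
        "content_sha256": hashlib.sha256(video_bytes).hexdigest(),
        "claim": "captured in-camera, no edits",
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {
        "manifest": manifest,
        "signature": device_key.sign(payload).hex(),
    }


def verify_capture(video_bytes: bytes, signed: dict) -> bool:
    """Check the signature AND that the hash still matches the file."""
    payload = json.dumps(signed["manifest"], sort_keys=True).encode()
    try:
        device_pub.verify(bytes.fromhex(signed["signature"]), payload)
    except InvalidSignature:
        return False
    expected = hashlib.sha256(video_bytes).hexdigest()
    return signed["manifest"]["content_sha256"] == expected


original = b"raw sensor feed..."
signed = sign_capture(original)
print(verify_capture(original, signed))                  # True
print(verify_capture(b"deepfaked replacement", signed))  # False
```

The catch, and this is exactly the weak point Stross flags, is everything around the code: who holds the device keys, who vouches for them, and whether ordinary people can be expected to check any of it.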
This also speaks to something I included in one of the (fictional) AI Lore books, The Big Scrub:
“AIs will replace your presence on social media with an AI that looks like and talks like you, but does things that specifically serve their agenda…
They will post AI-generated photos and videos of you onto your social media accounts saying or doing things you never said or did. They will send voice messages and texts to friends and loved ones saying things you also didn’t say and would never say to them, things that are completely out of character.”
That describes something more sinister and widespread, but the basic starter version will just be phishing combined with AI impersonation, which will be a huge problem. Will something like C2PA even put a dent in that, if we assume AI-generated content will scale massively and dwarf human content? Guess we're going to find out sooner rather than later!
Tim B.
WITNESS has a decent quick rundown of some of the risks associated with authenticity/provenance tools, within a human rights & privacy context:
https://www.youtube.com/watch?v=K9WFXQZQxJ8