I mean, people use dash-cams to protect themselves in case of a car crash, so do you think people in the future will also use body-cams to protect themselves in case they get involved in a fight?
Nah, once deepfakes become simple enough for the majority to make, citizen-created video evidence will be worthless.
Only ‘tamper-proof’ sources will be trusted, even when they too get tampered with.
Then some company will put out a camera that uploads all its video to the cloud with verification and makes it read-only.
I don’t remember if this came from cybersecurity logging practices or from anti-deepfake advice I saw online, but maybe physical cameras could constantly upload video evidence to a reliable third-party server that saves the checksums of, say, every minute’s worth of data. Then there would be no way for the source of the video to retroactively replace the content on that server with deepfake footage without leaving evidence in the checksums.
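To make the idea concrete, here’s a minimal sketch of that kind of checksum log (purely hypothetical, not any real service’s API): each minute’s chunk is hashed together with the previous log entry, so retroactively swapping in a deepfake chunk breaks every later entry in the chain.

```python
# Hypothetical sketch of a third-party checksum log for video chunks.
# Each chunk's hash is chained to the previous entry, so replacing any
# past chunk changes all subsequent digests and leaves evidence.
import hashlib

def append_chunk(log: list, chunk: bytes) -> str:
    """Hash this chunk together with the previous log entry and append it."""
    prev = log[-1] if log else "genesis"
    digest = hashlib.sha256(prev.encode() + chunk).hexdigest()
    log.append(digest)
    return digest

def verify(log: list, chunks: list) -> bool:
    """Recompute the chain from the raw chunks and compare to the stored log."""
    rebuilt = []
    for chunk in chunks:
        append_chunk(rebuilt, chunk)
    return rebuilt == log

log = []
chunks = [b"minute-1-video", b"minute-2-video"]
for c in chunks:
    append_chunk(log, c)

assert verify(log, chunks)                        # untampered footage checks out
assert not verify(log, [b"deepfake", chunks[1]])  # retroactive edit breaks the chain
```

This is the same trick append-only audit logs use: the server only needs to store the digests, not the footage itself, and anyone holding the original chunks can re-verify them later.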
I’m not sure if/how the third-party server would be able to tell that it’s listening to a real bodycam/dashcam rather than simply receiving data from a deepfake-generating AI model. I guess to use a video for evidence, you’d have to have corroborating evidence from nearby people who recorded the same event from a different angle (AI-generated videos would have trouble with creating different angles of the same event, right?).
And even if you can’t use a video as evidence, witness testimony has always been used in court. Someone else on Lemmy wrote that people have been making arguments in court since before there was photo/video evidence; our justice system (whoever “our” refers to) will simply revert to pre-camera ways when a photo/video cannot be trusted.
Another option related to the checksum solution is that camera manufacturers could implement a system on the physical camera where the raw file is tagged with some checksum/stamp, and the same is stored on the device. In a situation where the validity of the photo/video is in question, you could use the raw files and the physical device that captured them as proof.
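One way that on-device tagging could work (again, a hypothetical sketch, not any manufacturer’s actual scheme): the camera holds a secret key and tags each raw file with an HMAC, so only someone with the physical device (and its key) can produce or re-verify a valid tag.

```python
# Hypothetical on-camera tagging: the device keeps a secret key and
# stamps each raw file with an HMAC. Verifying a disputed file later
# requires the physical device holding that key.
import hashlib
import hmac

DEVICE_KEY = b"secret-baked-into-this-camera"  # made-up key for illustration

def tag_file(raw: bytes) -> str:
    """Compute the tag the camera would store alongside the raw file."""
    return hmac.new(DEVICE_KEY, raw, hashlib.sha256).hexdigest()

def verify_file(raw: bytes, tag: str) -> bool:
    """Re-verify a raw file against its stored tag, on the device."""
    return hmac.compare_digest(tag_file(raw), tag)

raw = b"raw sensor data"
tag = tag_file(raw)  # stored on the device and with the file

assert verify_file(raw, tag)
assert not verify_file(b"edited footage", tag)
```

A real implementation would presumably use per-device asymmetric keys in a secure element rather than a shared secret, so verification wouldn’t require trusting whoever holds the camera, but the basic idea is the same.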
I’m sure we will see multiple attempts to solve this, whether it be adversarial “de-fake” AIs, some form of physical verification, or something completely different. It will be interesting to see what works and what doesn’t, and what may turn out to become the standard way of verification.