I think https://www.astralcodexten.com/p/mostly-skeptical-thoughts-on-the mostly applies to this too
Image manipulation has always been a thing, and there are ways to counter it…
But we already know that a shocking number of people will simply take what they see at face value, even if it does look suspicious. The volume of AI-generated misinformation online is already too damn high without it gaining even more strings to its bow.
Governments don’t seem to be anywhere near keeping up with these AI developments either, so by the time the law starts accounting for all of this, the damage will already be long done.
All hail the nail and gear 😉
Honestly, yeah, I agree. Many mainstream social media platforms are infested with shitty generated content to the point of insanity.
On our vacation two weeks ago my wife took an awesome picture, with just one guy annoyingly in the background. She just tapped him and clicked the button… poof, gone, perfect photo.
But it’s never been this absolutely trivial to generate and distribute completely synthetic media. THAT is the real problem here.
Yep, this is a problem of volume of misinformation: the truth can get buried by a single person generating thousands of fake photos. It’s really easy to lie and really time-consuming to fact-check.
That’s precisely what I mean.
The effort ratio between generating synthetic visual media and corroborating or disproving a given piece of visual media has inverted and then grown by an order of magnitude in the last 3-5 years. That is fucking WILD, and more than a bit scary once you really start to consider the potential malicious implications, which you can already see being exploited all over the place today.
Photo manipulation has existed almost since the invention of photography; it was just much harder. See this famous example of photo retouching: https://www.history.com/news/josef-stalin-great-purge-photo-retouching
Great point. But tools that let a 10-year-old manipulate photos better than your example, in a matter of minutes, are in fact fairly new.
Hell they can generate photos that fool 70% of people on Facebook, though now that I say that, maybe that bar isn’t too high…
People can write things that aren’t true! Oh no, now we can’t trust trustworthy texts such as scientific papers that have undergone peer review!
I mean… have you seen the scathing reports on scientific papers, psychology especially? Peer review doesn’t catch liars. It catches bad experimental design, and it sometimes screens out people the reviewers don’t like. Replication can catch liars sometimes, but even in the sciences that are ‘hard’ it is rare to see replication because that doesn’t bring the grant money in.
The Verge are well versed in writing things that are untrue.
No sweat, since I am eschewing most things Google-related.
Look at the good side of this - now nobody has any reason to trust central authorities or any kind of official organization.
Previously it required enormous power to do such things. Now it’s a given that if there’s no chain of trust from the object to the spectator, any information is noise.
It all looks dark everywhere, but what if we finally get that anarchist future, made by the hands of our enemies?
Even a few months ago it was hard to use AI on photos, even for people with the know-how. I don’t like the idea of this, but it’s unavoidable. There is already so much misinformation, and this will make it so much worse.
I don’t believe there’s misinformation because we fail to discern the truth though. Misinformation exists because people believe what they want to believe.
There are even actual statues of completely made up stuff.
This is only a threat to people who take random pictures at face value, which should not have been a thing for a long while now, generative AI or not.
The source of a piece of information or a picture, as well as how it was checked, has been the most important part of handling online content for decades. The fact that it is now easier for some people to make edits does not change that.
Your comment somehow just made me realize something: when we see or read news, we have to trust the one who’s telling it to us, since we weren’t there in person to see it with our own eyes. Therefore, it’s always about a “chain of trust”.
This is true whether or not photos can be manipulated. People have been able to lie for as long as humanity has existed; nothing has really changed. Photography, just like globalization, has only brought everything closer together, making it easier to have a more direct, straightforward relationship to other people and events. With the arrival of AI, the distance between you and an event is going to increase a bit, but the core mechanics are still similar.
I kind of wonder, how do we really know that something is true? Do atoms actually exist? What if we’re being lied to by our authorities? You say “of course not”, but what if? I mean, if we blindly trust authorities, we end up like the Republicans, who believe everything that Fox News tells them. How, then, do we discern truth?
How, then, do we discern truth? I guess we have to give “proof” for everything, in the sense of mathematical proof. Something that everybody can follow, using only their fundamental assumptions and deduction. I guess that is what science is all about.
This reaffirms my wish to go back to monkey.
The world’s billionaires probably know there’s lots of photographic evidence of stuff they did at Epstein island floating around out there. This is why they’re trying to make AI produce art so realistic that photographs are no longer considered evidence, so they can just claim it’s AI-generated if any of that stuff ever gets out.
Won’t work against any good digital forensics.
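For what it’s worth, even the most trivial first pass, just looking at a file’s metadata, already tells you something about how it was produced. Here’s a rough sketch of that kind of check, assuming Pillow is installed and using made-up filenames; real forensics goes much further (error-level analysis, sensor noise patterns, provenance signatures), and metadata alone proves nothing since it can be stripped or spoofed.

```python
# Minimal sketch: compare basic EXIF metadata between two images.
# Hypothetical filenames; this is a first-pass sanity check, not real forensics.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path):
    """Return EXIF tags as a {name: value} dict, empty if the file has none."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

for path in ("original.jpg", "suspect.jpg"):  # hypothetical files
    tags = exif_summary(path)
    # Editors and re-encoders often rewrite or drop these fields entirely.
    print(path, {k: tags.get(k) for k in ("Software", "DateTime", "Model")})
```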
We’ve been able to do this kinda shit since the days of film. It wasn’t hard, it just required some clever stitching and blending.
It’s “more accessible”. I’m more concerned about shit like AI-generated videos, though. Those are spooky. Or also just the general accessibility of “natural bot nets” now.
It’s a shitty toy that’ll make some people sorry when they don’t have any photos from their night out without tiny godzilla dancing on their table. It won’t have the staying power Google wishes it to, since it’s useless except for gags.
But, please, Verge. “It took specialized knowledge and specialized tools to sabotage the intuitive trust in a photograph”? Get fucked.
TAKING OUR JOBS-
HARASSING WOMEN AND CHILDREN-
Boys are taking images of female classmates and using AI to deepfake nude photos - Fortune
-
Female Fox journalist harassed and chased by migrants while reporting outside shelter - Dailymail
A THREAT TO OUR WAY OF LIFE-
A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn - NYTimes
-
Poll: Americans Fear their way of life is under threat - Fox News
THEY’RE SHITTING ON THE BEACHES-
REWRITING HISTORY BY DOCTORING PHOTOS WITH NEVER SEEN BEFORE PHOTO MANIPULATIONS-
Sorry everyone, I keep forgetting which zeitgeist the media is currently using to make us hate and fear something.
Just look at how everything has become so much worse after Microsoft Word introduced the spell checker and Photoshop unleashed fuzzy select on the world. We can’t continue like this.
…did you just post 6 completely random articles as if there was some sort of point other than “news sites report lots of different news?”
did you just post 6 completely random articles
No, I mean there are headings and groupings to assist with the inference.
as if there was some sort of point other than “news sites report lots of different news?”
There might be a point. I see an association. If others do as well that’s good. If others don’t that is also ok.
To spell it out directly: I think it’s weird that the media is recycling its AI headlines from Republican headlines about immigration.
Often I cannot see the forest for the trees but sometimes I feel the presence of it even when I’m in it.
It’s always been about context and provenance. Who took the image? Are there supporting accounts?
But also, it has always been about the knowledge that no one… absolutely no one… does lines of coke from a woven mat floor covering.
Lots of obviously fake tip-offs in this one. The overall scrawny bitch aesthetic, the fact she is wearing a club/bar wristband, the bottle of Mom Party Select™ wine, and the person’s thumb/knee in the frame… All those details are initially plausible until you see the shitty AI artifacts.
This is an AI-edited photo, and literally every “artifact” you pointed out is present in the original except for the wine bottle. You’re not nearly as good at spotting fakes as you think you are; nobody is.
Em, what? The drug powder is what has been added in by the AI, what are you talking about?
All the details you just mentioned are also present in the unaltered photo though. Only the “drugs” are edited in.
Didn’t read the article, did you?
This comment is pure gold. You have already been fooled but think you have a discerning eye. You are not immune to propaganda.
Here is a famous faked photo of fairies from 1917 -> https://en.m.wikipedia.org/wiki/Cottingley_Fairies
Nope, it must be real, because everyone knows fake photographs only became possible in 2022 with AI; otherwise all these articles would be stupid.