- cross-posted to:
- technology@beehaw.org
Too late. The systems we are building as a species will soon become sentient. We’ll have aliens right here, no UFOs required. Where the music comes from will no longer be relevant.
Ok perfect so since AGI is right around the corner and this is all irrelevant, then I’m sure the AI companies won’t mind paying up.
That’s not the way it works. Do you think the Roman Empire just picked a particular Tuesday to collapse? It’s a process and will take a while.
Is this how Disney becomes the owner of all of the AI companies too? Lol
Good. Burn it down. Bankrupt them.
If it’s so “critical to national security” then nationalize it.
The “burn it down” variant would only lead to a scenario where the copyright holders become the AI companies, since they have the content to train them. AI will not go away; it might just change ownership to someone worse, though.
Nationalizing sounds better; better still would be to put it under UNESCO stewardship.
Hard to imagine worse than the insane techno-feudalists who currently own it.
Believe me, Disney is fucking ruthless compared to Anthropic.
As Anthropic argued, it now “faces hundreds of billions of dollars in potential damages liability at trial in four months” based on a class certification rushed at “warp speed” that involves “up to seven million potential claimants, whose works span a century of publishing history,” each possibly triggering a $150,000 fine.
So you knew what stealing the copyrighted works could result in, and your defense is that you stole too much? That’s not how that works.
If scraping is illegal, so is the Internet Archive, and that would be an immense loss for the world.
This is the real concern. Copyright abuse has been rampant for a long time, and the only reason things like the Internet Archive are allowed to exist is because the copyright holders don’t want to pick a fight they could potentially lose and lessen their hold on the IPs they’re hoarding. The AI case is the perfect thing for them, because it’s a very clear violation with a good amount of public support on their side, and winning will allow them to crack down even harder on all the things like the Internet Archive that should be fair use. AI is bad, but this fight won’t benefit the public either way.
I wouldn’t even say AI is bad. I currently have Qwen 3 running on my own GPU, giving me a course in RegEx and how to use it. It sometimes makes mistakes in the examples (we all know chatbots are bad at counting the r’s in “strawberry”), but I see that as “spot the error” training for me, and the instructions themselves have been error-free so far. Since I do the lessons myself, I can easily spot when something goes wrong.
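For instance (a hypothetical sketch of the kind of exercise described, using Python’s standard `re` module), a learner can verify a chatbot’s claims directly instead of taking them on faith:

```python
import re

# Count the r's in "strawberry" with a regex -- exactly the kind of
# detail chatbots often get wrong and a learner can easily verify.
word = "strawberry"
r_count = len(re.findall(r"r", word))
print(r_count)  # 3
```

If the chatbot’s lesson claims a different count, you’ve found your “spot the error” moment.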
AI crammed into everything because venture capitalists are throwing things at the wall to see what sticks is probably the main reason public opinion of chatbots is bad, and I don’t condone that either, but the technology itself has uses and is an impressive accomplishment.
Same with image generation: I am shit at drawing, and I don’t have the money to commission art when I want something specific, but I can generate what I want for myself.
If the copyright side wins, we all might lose the option to run image generation and LLMs on our own hardware; there will never be an open-source LLM, and resources that are important to us all will come under even more fire than they already are. Copyright holders will become the new AI companies, and without competition the enshittification will start instantly.
What you see as “spot the error” type training, another person sees as absolute fact that they internalize and use to make decisions that impact the world. The internet gave rise to the golden age of conspiracy theories, which is having a major impact on the worsening political climate, and it’s because the average user isn’t able to differentiate information from disinformation. AI chatbots giving people the answer they’re looking for rather than the truth is only going to compound the issue.
I agree that this has to get better in the future, but the technology is pretty young, and I am fairly sure that fixing this stuff is a high priority at those companies - it’s bad PR for them. But people are already gorging themselves on faulty info via social media - I don’t see chatbots making this much worse than it already is.
The purpose of copyright is to drive works into the public domain. Works are only supposed to remain exclusive to the artist for a very limited time, not a “century of publishing history”.
The copyright industry should lose this battle. Copyright exclusivity should be shorter than patent exclusivity.
Copyright companies losing the case wouldn’t make copyright any shorter.
Their winning of the case reinforces a harmful precedent.
At the very least, the claims of those members of the class that are based on >20-year copyrights should be summarily rejected.
Copyright owners winning the case maintains the status quo.
The AI companies winning the case means anything leaked on the internet or even just hosted by a company can be used by anyone, including private photos and communication.
Copyright owners would then be the new AI companies, and compared to now, where open-source AI is a possibility, it never will be, because only they will have enough content to train models. And without any competition, enshittification will go full speed ahead, meaning the chatbots you don’t like will still be there, only now they will try to sell you stuff, and you won’t even be able to choose a chatbot that doesn’t want to upsell you.
Actually that usually is how it works. Unfortunately.
“Too big to fail” was probably made up by the big ones.
No it won’t. Just their companies. Which are the ones making slop. If your AI does something actually useful it will survive.
You know, if they lose, their tech will probably become the property of copyright holders, which means your new AI Overlord has the first name Walt.
With the amount of money pouring in you’d think they’d just pay for it
Now now. You know that’s not how capitalism works.
Probably would have been cheaper to license everything you stole, eh, Anthropic?
I hope LLMs and generative AI crash and burn.
I’m thinking, honestly, what if that’s the planned purpose of this bubble.
I’m explaining: those “AIs” involve assembling large datasets and making them available, poisoning the Web, and creating demand for a specific kind of hardware.
When it bursts, not everything bursts.
Suddenly there will be plenty of no longer required hardware usable for normal ML applications like face recognition, voice recognition, text analysis to identify its author, combat drones with target selection, all kinds of stuff. It will be dirt cheap, compared to its current price, as it was with Sun hardware after the dotcom crash.
There will still be those datasets, which can be analyzed for plenty of purposes. Legal or not, they have already been processed into a usable and convenient state.
There will be the Web, covered with a layer of AI slop as tall as the Great Wall of China.
There will likely be a bankrupt nation which will have a lot of things failing due to that.
And there will still be all the centralized services. Suppose on that day you go search something in Google, and there’s only the Google summary present, no results list (or maybe even a results list, whatever, but suddenly weighed differently), saying that you’ve been owned by domestic enemies yadda-yadda and the patriotic corporations are implementing a popular state of emergency or something like that. You go to Facebook, and when you write something there, your messages are premoderated by an AI so that you’d not be able to god forbid say something wrong. An LLM might not be able to support a decent enough conversation, but to edit out things you say, or PGP keys you send, in real time without anything appearing strange - easily. Or to change some real person’s style of speech to yours.
Suppose all of the not-degoogled Android installations start doing things like that. Amazon’s logistics suddenly start working to support a putsch. Facebook and WhatsApp do what I described, or just fail. Apple makes a presentation of a new, magnificent, ingenious, miraculous, patriotic change to a better system of government, maybe even with Jony Ive as the speaker, and possibly does the same unnoticeable censorship. Microsoft pushes one malicious update three months earlier with a backdoor into all Windows installations doing the same, and commits its datacenters to the common effort. And let’s just say it’s possible that a similar thing is done by some Linux developer who believes in the idea, on some of the major distributions - it wouldn’t need to do much, just provide a remotely usable backdoor.
I don’t list Twitter because, honestly, it doesn’t seem to work well enough or have good enough coverage.
So - this seems a pretty possible apocalypse scenario which does lead to a sudden installation of a dictatorial regime with all the necessary surveillance, planning, censorship and enforcement already being functioning systems.
So - of course, apocalypse scenarios have been a normal thing in movies for many years, but it’s funny how the more plausible such scenarios become, the less often they are depicted in art.
It’s so very, very, deeply, fucking bleak. I can’t sleep at night, because I see this clear as day. I feel like a Jew in 1938 Berlin, only unlike that guy I can’t get out, because this is global. There is literally nowhere to run.
Either society is going to crash and burn, or we will see global war, which will crash and burn society.
There is no escape, the writing is on the fucking wall.
I wish god did this.
threatens to “financially ruin” the entire AI industry
No. Just the LLM industry and AI slop image and video generation industries. All of the legitimate uses of AI (drug discovery, finding solar panel improvements, self driving vehicles, etc) are all completely immune from this lawsuit, because they’re not dependent on stealing other people’s work.
But it would also mean that the Internet Archive is illegal. Even though they don’t profit, if scraping the internet is a copyright violation, then they are as guilty as Anthropic.
IA doesn’t make any money off the content. Not that LLM companies do, but that’s what they’d want.
And this is exactly why I think the IA will be forced to close down, while the AI companies that trained their models on it will not only stay but, in an ironic twist, be praised for preserving information. Because one side participates in capitalism and the other doesn’t. They will claim AI is transformative enough even when it isn’t, because the overly rich have invested too much money into the grift.
Archival is a fair use.
Do you think that would rescue the IA from the type of people who made the IA already pull 300k books?
No. But going after LLMs won’t make the situation for the IA any worse, not directly anyway.
If the courts decide that scraping is illegal, the IA can close up shop.
Profit (or even revenue) is not required for it to be considered an infringement, in the current legal framework.
Scraping the internet is not illegal. All the AI companies went much further than that: they accessed private writings, private code, and copyrighted images; they scanned copyrighted books (and then destroyed them), downloaded terabytes of copyrighted torrents, etc.
So the message is that piracy is OK when it’s done massively by a big company. They’re claiming “fair use” and most judges are buying it (or being bought?).
I say move it out of the US.
They should have done that long ago, and if they haven’t already started a backup in both Europe and China, it’s high time.
Let’s go baby! The law is the law, and it applies to everybody
If the “genie doesn’t go back in the bottle”, make him pay for what he’s stealing.
I just remembered a movie where a real genie was released from a bottle. He turned the world into chaos by freeing his own kind, and if it weren’t for the power of the plot, I’m afraid the people there would have been enslaved or died out.
Although here you’d already have to file a lawsuit for theft of the soul, in the literal sense of the word.
I remember that X-Files episode!
Damn, what were you watching those masterpieces on? What were you smoking back then? Though I don’t know which X-Files you’re talking about. Maybe I watched the wrong thing… Which episode?
The law absolutely does not apply to everybody, and you are well aware of that.
The law applies to everybody, but the lawmakers change the laws to benefit certain people. And then Trump pardons the rest, lol.
Shouldn’t it?
This would mean the copyright holders like Disney are now the AI companies, because they have the content to train them. That’s even worse, man.
It’s not, because they would only train on things they own, which is an absolutely tiny fraction of everything that everyone owns. It’s like complaining that a rich person gets to enjoy their lavish estate when the alternative is that they get to use every home in the world.
Do you know how much content Disney has? Go scrolling: https://en.wikipedia.org/wiki/List_of_assets_owned_by_the_Walt_Disney_Company Edit: and that’s the tip of the iceberg, because if they band together with others from the MPAA & RIAA, they can suffocate the entire movie, book, and music world with it.
They have $0.2T in assets; the world has around $660T in assets, which, as I said before, makes theirs a tiny fraction. Obviously both figures include a lot of assets that aren’t worthwhile for AI training, such as theme parks. But consider that a single movie worth millions or billions has the same benefit for AI training as another movie worth thousands: the amount of assets Disney owns is not nearly as relevant as you are making it out to be.
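For what it’s worth, here is the ratio those two figures imply (a back-of-the-envelope sketch using only the numbers quoted above, which are themselves rough estimates):

```python
# Figures quoted in the comment above: Disney ~$0.2T in assets,
# world total ~$660T. Compute Disney's share as a percentage.
disney_assets = 0.2e12
world_assets = 660e12
share = disney_assets / world_assets
print(f"{share:.4%}")  # 0.0303%
```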
The law is not the law. I am the law.
insert awesome guitar riff here
Reference: https://youtu.be/Kl_sRb0uQ7A
Well, theft has never been the best foundation for a business, has it?
While I completely agree that copyright terms are completely overblown, they are valid law that other people suffer under, so it is 100% fair to make them suffer the same. Or worse, as they all broke the law for commercial gain.
Well, theft has never been the best foundation for a business, has it?
History would suggest otherwise.
Oh no! Building a product with stolen data was a rotten idea after all. Well, at least the AI companies can use their fabulously genius PhD level LLMs to weasel their way out of all these lawsuits. Right?
PhD level LLM = paying MAs $21/hr to write summaries of paragraphs for them to improve off of. Google Gemini outsourced their work like this, so I assume everyone else did too.
I propose that anyone defending themselves in court over AI stealing data must be represented exclusively by AI.
Hilarious.
“ooh, so sorry, but your LLM was trained on proprietary documents stolen from several major law firms, and they are all suing you now”
That would be glorious. If the future of your company depends on the LLM keeping track of hundreds of details and drawing the right conclusions, it’s game over during the first day.
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
And yet, despite 20 years of experience, the only side Ashley presents is the technologists’ side.