You have thirteen hours in which to solve the labyrinth before your baby AI becomes one of us, forever.
While AI David Bowie sings you rock lullabies.
Considering how many false positives Cloudflare serves I see nothing but misery coming from this.
Lol I work in healthcare and Cloudflare regularly blocks incoming electronic orders because the clinical notes “resemble” SQL injection. Nurses type all sorts of random stuff in their notes so there’s no managing that. Drives me insane!
In terms of Lemmy instances, if your instance is behind cloudflare and you turn on AI protection, federation breaks. So their tools are not very helpful for fighting the AI scraping.
Can’t you configure exceptions for behaviours?
I’m not sure what can be done at the free tier. There is a switch to turn on AI bot blocking, and turning it on breaks federation.
You can’t whitelist domains because federation could come from any domain. Maybe you could somehow whitelist /inbox for the ActivityPub communication, but I’m not sure how to do that in Cloudflare.
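I haven’t tried this myself, but Cloudflare’s custom WAF rules can “Skip” selected protection features for requests matching an expression, so something along these lines might exempt ActivityPub deliveries (the /inbox paths are an assumption about where your instance receives federation, not a known Lemmy default):

```
# Hypothetical Cloudflare custom rule (Security > WAF > Custom rules)
# Expression matching ActivityPub inbox deliveries:
(http.request.uri.path eq "/inbox")
  or (starts_with(http.request.uri.path, "/users/")
      and ends_with(http.request.uri.path, "/inbox"))
# Action: Skip, selecting the bot-fight / AI-blocking features to bypass
```

Whether the free tier lets you skip the AI-blocking feature specifically is something you’d have to check in the dashboard.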
So they rewrote Nepenthes (or Iocaine, Spigot, Django-llm-poison, Quixotic, Konterfai, Caddy-defender, plus inevitably some Rust versions)
Edit, but with ✨AI✨ and apparently only true facts
It’s the consequences of the MIT and Apache licenses showing up in real time.
GPL your software, people!
Cloudflare is providing the service, not libraries
DNA Lounge has something similar - I think they even mentioned infinite JavaScript loops, and images that expand like zip-bombs.
I guess this is what the first iteration of the Blackwall looks like.
Gotta say “AI Labyrinth” sounds almost as cool.
I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”
Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: Why would anyone index those?
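For illustration, a robots.txt in the spirit of what Wikipedia publishes might look like this (paths simplified, not their actual file):

```
# Hypothetical robots.txt, loosely modeled on Wikipedia's
User-agent: *
Allow: /wiki/        # canonical article pages
Disallow: /w/        # technical pages: edit forms, histories, special pages
```

A crawler that honours this gets exactly the content worth indexing; one that ignores it mostly downloads junk.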
Because you are coming from the perspective of a reasonable person
These people are billionaires who expect to get everything for free. Rules are for the plebs, just take it already
That’s what they are saying though. These shouldn’t be thought of as “rules”, they are suggestions near universally designed to point you to the most relevant content. Ignoring them isn’t “stealing something not meant to be captured”, it’s wasting time and resources of your own infra on something very likely to be useless to you.
They want everything. Does it exist, but it’s not in their dataset? Then they want it.
They want their AI to answer any question you could possibly ask it. Filtering out what is and isn’t useful doesn’t achieve that.
Because it takes work to obey the rules, and you get less data for it. The theoretical competitor could get more ignoring those and get some vague advantage for it.
I’d not be surprised if the crawlers they used were bare-basic utilities set up to just grab everything without worrying about rules and the like.
Imagine how much power is wasted on this unfortunate necessity.
Now imagine how much power will be wasted circumventing it.
Fucking clown world we live in
On one hand, yes. On the other… imagine the frustration of management at companies making and selling AI services. This is such a sweet thing to imagine.
I…uh…frick.
I just want to keep using uncensored AI that answers my questions. Why is this a good thing?
Because it only harms bots that ignore the “no crawl” directive, so your AI remains uncensored.
Good I ignore that too. I want a world where information is shared. I can get behind the
Get behind the what?
Perhaps an AI crawler crashed Melvin’s machine halfway through the reply, denying that information to everyone else!
That’s not what the nofollow command means.
Don’t worry, information is still shared. But with people, not with capitalist pigs.
Capitalist pigs are paying media to generate AI hatred to help convince you people to get behind laws that will limit info sharing under the guise of IP and copyright.
Because it’s not AI, it’s LLMs, and all LLMs do is guess what word most likely comes next in a sentence. That’s why they are terrible at answering questions and do things like suggest adding glue to the cheese on your pizza because somewhere in the training data some idiot said that.
The training data for LLMs come from the internet, and the internet is full of idiots.
That’s what I do too with less accuracy and knowledge. I don’t get why I have to hate this. Feels like a bunch of cavemen telling me to hate fire because it might burn the food
Because we have better methods that are easier, cheaper, and less damaging to the environment. They are solving nothing and wasting a fuckton of resources to do so.
It’s like telling cavemen they don’t need fire because you can mount an expedition to the nearest volcano to cook food without the need for fuel, then bring it back to them.
The best case scenario is the LLM tells you information that is already available on the internet, but 50% of the time it just makes shit up.
Wasteful?
Energy production is an issue. Using that energy isn’t. LLMs are a better use of energy than most of the useless shit we produce everyday.
Did the LLMs tell you that? It’s not hard to look up on your own:
Data centers, in particular, are responsible for an estimated 2% of electricity use in the U.S., consuming up to 50 times more energy than an average commercial building, and that number is only trending up as increasingly popular large language models (LLMs) become connected to data centers and eat up huge amounts of data. Based on current data center investment trends, LLMs could emit the equivalent of five billion U.S. cross-country flights in one year.
Far more than straightforward search engines that have the exact same information and don’t make shit up half the time.
LLM is a subset of AI
From the article it seems like they don’t generate a new labyrinth every single time: “Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval.”
So this showed up last week: https://github.com/raminf/RoboNope-nginx
Similar vibe, minus the AI.
I’m glad we’re burning the forests even faster in the name of identity politics.
Do you identify as retarded?
Well that was a swing and a miss, back to the dugout with you dumbass.
What has this anything to do with identity politics?
Don’t feed the troll
Cloudflare kind of real for this. I love it.
It makes perfect sense for them as a business, infinite automated traffic equals infinite costs and lower server stability, but at the same time how often do giant tech companies do things that make sense these days?
Kind of seems like they simply installed this dude’s tarpit from a few months ago
I’m sorry how does it make sense?
I introduce to you, the Trace Buster Buster!
If you’ve never seen the movie The Big Hit, it’s great.
So we’re burning fossil fuels and destroying the planet so bots can try to deceive one another on the Internet in pursuit of our personal data. I feel like dystopian cyberpunk predictions didn’t fully understand how fucking stupid we are…
They probably knew, but the truth is just boring and it’s funner to dramatize things, haha.
“while allowing legitimate users and verified crawlers to browse normally.”
What is a “verified crawler” though? What I worry about is, is it only big companies like Google that are allowed to have them now?
Any accessibility service will also see the “hidden links”, and while a blind person with a screen reader will notice if they wander off into generated pages, it will waste their time too. Especially if they don’t know about such a “feature” they’ll be very confused.
Also, I don’t know about you, but I absolutely have a use for crawling X, Google maps, Reddit, YouTube, and getting information from there without interacting with the service myself.
I assume a crawler which adheres to robots.txt
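Respecting robots.txt is genuinely trivial; Python even ships a parser in the standard library. A minimal sketch (the rules and user-agent here are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# Parse a hypothetical robots.txt that fences off /private/
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

def may_fetch(url: str, agent: str = "ExampleBot") -> bool:
    """Check the parsed robots.txt rules before requesting a page."""
    return rp.can_fetch(agent, url)

print(may_fetch("https://example.com/public/page"))   # True
print(may_fetch("https://example.com/private/page"))  # False
```

A polite crawler just runs a check like this before every request; there’s no technical excuse for skipping it.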
I would love to think so. But the word “verified” suggests more.
IP verification is a not uncommon method for commercial crawlers
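For example, Google documents verifying Googlebot by reverse-DNS’ing the requester’s IP and then forward-resolving the returned name to confirm it maps back to the same IP. A rough sketch (the helper names are mine, not any official API):

```python
import socket

def hostname_is_google(host: str) -> bool:
    # Genuine Googlebot reverse-DNS names end in googlebot.com or google.com
    return host.endswith(".googlebot.com") or host.endswith(".google.com")

def is_verified_googlebot(ip: str) -> bool:
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # reverse DNS lookup
    except OSError:
        return False
    if not hostname_is_google(host):
        return False
    try:
        # Forward-confirm: the name must resolve back to the same IP,
        # otherwise anyone could fake the reverse record.
        return ip in socket.gethostbyname_ex(host)[2]
    except OSError:
        return False
```

That’s presumably the kind of check hiding behind the word “verified”, which does favor crawlers big enough to publish stable DNS identities.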
Cloudflare isn’t the best at blocking things. As long as your crawler isn’t horribly misconfigured you shouldn’t have many issues.
They used AI to destroy AI
That’s just BattleBots with a different name.
You’re not wrong.
Ok, I now need a screensaver that I can tie to a cloudflare instance that visualizes the generated “maze” and a bot’s attempts to get out.
You probably just should let an AI generate that.
They should program the actions and reactions of each system to actual battle bots and then televise the event for our entertainment.
Then get bored when it devolves into a wedge meta.
Somehow one of them still invents Tombstone.
Putting a chopped-down lawnmower blade in front of a thing, and having it spin at hard-drive speeds, is honestly kinda terrifying…
No, it is far less environmentally friendly than rc bots made of metal, plastic, and electronics full of nasty little things like batteries blasting, sawing, burning and smashing one another to pieces.