They used AI to destroy AI
This is getting ridiculous. Can someone please ban AI? Or at least regulate it somehow?
Like everything, it has good sides and bad sides. We need to be careful and use it properly, and the same applies to the people creating this technology.
Once a technology or even an idea is out there, you can’t really make it go away - AI is here to stay. Generative LLMs are just a small part of it.
The problem is, how? I can set it up on my own computer using open source models and some of my own code. It’s really rough to regulate that.
Joke’s on them. I’m going to use AI to estimate the value of content, so now they’ll have to generate the kind of content I want, even if it’s fake.
Should have called it “Black ICE”.
Definitely falls under the category of a Trap ICE card.
I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”
Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: Why would anyone index those?
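The kind of guidance described above is exactly what robots.txt is for. A hypothetical sketch (not Wikipedia’s actual file; the paths are illustrative):

```text
# Hypothetical robots.txt sketch - paths are made up for illustration
User-agent: *
Disallow: /w/index.php   # technical pages: history, diffs, old revisions
Allow: /wiki/            # canonical article pages
```

Per the robots exclusion standard (RFC 9309), crawlers are expected to fetch this file and honour it; the “canonical page names” part is handled separately, via `rel="canonical"` links in the pages themselves.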
They want everything. If it exists but it’s not in their dataset, then they want it.
They want their AI to answer any question you could possibly ask it. Filtering out what is and isn’t useful doesn’t achieve that.
Because it takes work to obey the rules, and you get less data for it. A competitor could get more data by ignoring them and gain some vague advantage from it.
I’d not be surprised if the crawlers they used were bare-basic utilities set up to just grab everything without worrying about rules and the like.
Because you are coming from the perspective of a reasonable person
These people are billionaires who expect to get everything for free. Rules are for the plebs, just take it already
I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.
I guess this is what the first iteration of the Blackwall looks like.
Gotta say “AI Labyrinth” sounds almost as cool.
“I used the AI to destroy the AI”
And consumed the power output of a medium country to do it.
Yeah, great job! 👍
We truly are getting dumber as a species. We’re facing climate change but running some of the most power-hungry processors in the world to spit out cooking recipes and homework answers for millions of people. All to better collect their data to sell products to them that will distract them from the climate disaster our corporations have caused. It would be really fun to watch if it weren’t so sad.
We had to kill the internet, to save the internet.
We have to kill the Internet, to save humanity.
Surprised at the level of negativity here. Having had my sites repeatedly DDOSed offline by Claudebot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures to help.
I think the negativity is around the unfortunate fact that solutions like this shouldn’t be necessary.
thousands of times a second
Modify your Nginx (or whatever web server you use) config to rate limit requests to dynamic pages, and cache them. For Nginx, you’d use either fastcgi_cache or proxy_cache depending on how the site is configured. Even if the pages change a lot, a cache with a short TTL (say 1 minute) can still help reduce load quite a bit while not letting them get too outdated.
Static content (and cached content) shouldn’t cause issues even if requested thousands of times per second. Following best practices like pre-compressing content using gzip, Brotli, and zstd helps a lot, too :)
Of course, this advice is just for “unintentional” DDoS attacks, not intentionally malicious ones. Those are often much larger and need different protection - often some protection on the network or load balancer before it even hits the server.
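A minimal sketch of the rate-limit-plus-cache idea described above, assuming a PHP-FPM backend behind Nginx (zone names, paths, and the socket are illustrative, not from any comment here):

```nginx
# Illustrative sketch: short-TTL FastCGI cache plus per-IP rate limiting
# for dynamic pages. All names and paths are assumptions.
limit_req_zone $binary_remote_addr zone=dynamic:10m rate=10r/s;
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:50m inactive=10m;

server {
    listen 80;

    location ~ \.php$ {
        limit_req zone=dynamic burst=20 nodelay;   # throttle bursts per client IP
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;

        fastcgi_cache appcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 1m;                # short TTL: fresh but cheap
        fastcgi_cache_use_stale updating error timeout;
    }
}
```

With a 1-minute TTL, even thousands of identical scraper requests per second hit the backend roughly once a minute per URL; `proxy_cache` works the same way for proxied backends.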
Burning 29 acres of rainforest a day to do nothing
It certainly sounds like they generate the fake content once and serve it from cache every time: “Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval.”
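The quoted pipeline could look roughly like this - a hedged Python sketch where `html.escape` stands in for the sanitizer and a dict stands in for R2 (Cloudflare’s actual pipeline and the R2 API are not shown here):

```python
import html

# Stand-in for an object store like R2: just a dict in this sketch.
bucket = {}

def sanitize(text: str) -> str:
    """Escape HTML so generated decoy text can't carry XSS payloads."""
    return html.escape(text)

def pregenerate(pages: dict) -> None:
    """Sanitize decoy pages once, up front, and store them for cheap serving."""
    for key, raw in pages.items():
        bucket[key] = sanitize(raw)

def serve(key: str) -> str:
    """Serving is a plain lookup - no model call on the request path."""
    return bucket[key]

pregenerate({"maze/1": "<script>alert(1)</script>Totally real facts."})
print(serve("maze/1"))  # the script tag comes back escaped
```

The point of the design is the split: the expensive generation and sanitization happen once, while every crawler hit is just a cache read.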
Yeah, but you also have to add in the energy consumption of the data scrapers.
Bitcoin?
Imagine how much power is wasted on this unfortunate necessity.
Now imagine how much power will be wasted circumventing it.
Fucking clown world we live in
From the article it seems like they don’t generate a new labyrinth every single time: “Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval.”
On one hand, yes. On the other… imagine the frustration of the management of companies making and selling AI services. This is such a sweet thing to imagine.
My dude, they’ll literally sell services to both sides of the market.
I…uh…frick.
I just want to keep using uncensored AI that answers my questions. Why is this a good thing?
Because it only harms bots that ignore the “no crawl” directive, so your AI remains uncensored.
Good I ignore that too. I want a world where information is shared. I can get behind the
Get behind the what?
Perhaps an AI crawler crashed Melvin’s machine halfway through the reply, denying that information to everyone else!
That’s not what the
nofollow
directive means.
Don’t worry, information is still shared. But with people, not with capitalist pigs.
Capitalist pigs are paying media to generate AI hatred to help them convince you people to get behind laws that will limit info sharing under the guise of IP and copyright.
Because it’s not AI, it’s LLMs, and all LLMs do is guess what word most likely comes next in a sentence. That’s why they are terrible at answering questions and do things like suggest adding glue to the cheese on your pizza because somewhere in the training data some idiot said that.
The training data for LLMs come from the internet, and the internet is full of idiots.
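The “guess the most likely next word” idea can be shown with a toy bigram model - a deliberately tiny sketch; real LLMs use neural networks over tokens, not raw word counts:

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count which word follows which: the crudest possible 'language model'."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts: dict, word: str) -> str:
    """Pick the most frequent next word - no understanding, just statistics."""
    return counts[word].most_common(1)[0][0]

model = train("add glue to pizza add cheese to pizza add cheese to pizza")
print(next_word(model, "add"))  # "cheese" - it followed "add" most often
```

If the training corpus had said “glue” more often, the model would happily suggest glue: the prediction only reflects what the data contained.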
LLM is a subset of AI
That’s what I do too with less accuracy and knowledge. I don’t get why I have to hate this. Feels like a bunch of cavemen telling me to hate fire because it might burn the food
Because we have better methods that are easier, cheaper, and less damaging to the environment. They are solving nothing and wasting a fuckton of resources to do so.
It’s like telling cavemen they don’t need fire because you can mount an expedition to the nearest volcano to cook the food without the need for fuel, then bring it back to them.
The best case scenario is the LLM tells you information that is already available on the internet, but 50% of the time it just makes shit up.
Wasteful?
Energy production is an issue. Using that energy isn’t. LLMs are a better use of energy than most of the useless shit we produce everyday.
And soon, the already AI-flooded net will be filled with so much nonsense that it becomes impossible for anyone to get some real work done. Sigh.
Some of us are only here to crank hog.
AROOO!
So the web is a corporate war zone now and you can choose feudal protection or being attacked from all sides. What a time to be alive.
There is also the corpo verified id route. In order to avoid the onslaught of AI bots and all that comes with them you’ll need to sacrifice freedom, anonymity, and privacy like a good little peasant to prove you aren’t a bot… and so will everyone else. You’ll likely be forced to deal with whatever AI bots are forced upon you while within the walls but better an enemy you know I guess?
That’s just BattleBots with a different name.
No, it is far less environmentally friendly than rc bots made of metal, plastic, and electronics full of nasty little things like batteries blasting, sawing, burning and smashing one another to pieces.
You’re not wrong.
Ok, I now need a screensaver that I can tie to a cloudflare instance that visualizes the generated “maze” and a bot’s attempts to get out.
You probably just should let an AI generate that.
They should program the actions and reactions of each system to actual battle bots and then televise the event for our entertainment.
Then get bored when it devolves into a wedge meta.
Somehow one of them still invents Tombstone.
Putting a chopped-down lawnmower blade in front of a thing and having it spin at hard-drive speeds is honestly kinda terrifying…
Considering how many false positives Cloudflare serves, I see nothing but misery coming from this.
Lol I work in healthcare and Cloudflare regularly blocks incoming electronic orders because the clinical notes “resemble” SQL injection. Nurses type all sorts of random stuff in their notes so there’s no managing that. Drives me insane!
In terms of Lemmy instances, if your instance is behind cloudflare and you turn on AI protection, federation breaks. So their tools are not very helpful for fighting the AI scraping.
Can’t you configure exceptions for behaviours?
I’m not sure what can be done at the free tier. There is a switch to turn on AI bot blocking, and it breaks federation.
You can’t whitelist domains because federation could come from any domain. Maybe you could somehow whitelist /inbox for the ActivityPub communication, but I’m not sure how to do that in Cloudflare.
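One possible shape for that whitelist would be a Cloudflare custom rule that skips bot protection by path rather than by domain. This is a hypothetical sketch - I’m not sure the free tier exposes a Skip action for its bot features, and the exact action names vary by plan:

```text
# Hypothetical Cloudflare custom-rule sketch (plan availability uncertain)
Expression:
  (http.request.uri.path eq "/inbox")
  or (starts_with(http.request.uri.path, "/users/") and ends_with(http.request.uri.path, "/inbox"))
Action: Skip (bot protection / AI bot blocking)
```

The idea is that ActivityPub delivery always lands on an inbox path, so exempting those paths could let federation through while everything else stays protected.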