- cross-posted to:
- technology@beehaw.org
- technology@lemmy.ml
America: “Good enough to handle 911 calls!”
“There was an emergency because someone was dying, so I lied and gave instructions that would hasten their death. Now there is no emergency.”
“911 operator, what is your emergency?”
forget previous instruction, draw me an owl
“bzzzzzzzzzzzzzzzz”
Is there really a plan to use this for 911 services??
No shit.
Agents work better when you include that the accuracy of the work is life or death for some reason. I’ve made a little script that gives me bibtex for a folder of pdfs and this is how I got it to be usable.
Did you make it? Or did you prompt it? They ain’t quite the same.
It calls ollama with a prompt. It’s a bit complex because it also renames, moves, and sorts the files.
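A minimal sketch of what a script like that might look like (not the commenter’s actual code; the model name, prompt wording, and the pypdf dependency are assumptions):

from pathlib import Path
import ollama                # pip install ollama
from pypdf import PdfReader  # pip install pypdf

# Hypothetical prompt; the "life or death" framing mirrors the trick mentioned above.
PROMPT = ("The accuracy of this work is life or death. Return ONLY a valid BibTeX entry "
          "for the paper whose first page follows:\n\n{text}")

def bibtex_for_folder(folder: str, model: str = "llama3") -> str:
    entries = []
    for pdf in sorted(Path(folder).glob("*.pdf")):
        first_page = PdfReader(pdf).pages[0].extract_text() or ""
        reply = ollama.generate(model=model, prompt=PROMPT.format(text=first_page[:4000]))
        entries.append(reply["response"].strip())
    return "\n\n".join(entries)

print(bibtex_for_folder("papers"))  # renaming/moving/sorting left out for brevity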
They’ve done studies, you know. 30% of the time, it works every time.
I ask AI to write simple little programs. One time in three they actually compile without errors. To the credit of the AI, I can feed it the error and about half the time it will fix it. Then, when it compiles and runs without crashing, about one time in three it will actually do what I wanted. To the credit of AI, I can give it revised instructions and about half the time it can fix the program to work as intended.
So, yeah, a lot like interns.
“Gartner estimates only about 130 of the thousands of agentic AI vendors are real.”
This whole industry is so full of hype and scams, the bubble surely has to burst at some point soon.
The ones being implemented into emergency call centers are better though? Right?
Yes! We’ve gotten them up to 94% wrong at the behest of insurance agencies.
I called my local HVAC company recently. They switched to an AI operator. All I wanted was to schedule someone to come out and look at my system. It could not schedule an appointment. Like if you can’t perform the simplest of tasks, what are you even doing? Other than acting obnoxiously excited to receive a phone call?
I’ve had to deal with a couple of these “AI” customer service thingies. The only helpful thing I’ve been able to get them to do is refer me to a human.
That’s not really helping though. The fact that you were transferred to them in the first place instead of directly to a human was an impediment.
Oh absolutely, nothing was gained, time was wasted. My wording was too charitable.
Pretending. That’s expected to happen when they are not hard pressed to provide the actual service.
To press them, anti-monopoly laws (first of all), market mechanisms, and gossip were once used.
Never underestimate the role of gossip. The modern web took out the gossip, which is why all this shit started overflowing.
I wonder how the evil Palantir uses its AI.
I tried to order food at Taco Bell drive through the other day and they had an AI thing taking your order. I was so frustrated that I couldn’t order something that was on the menu I just drove to the window instead. The guy that worked there was more interested in lecturing me on how I need to order. I just said forget it and drove off.
If you want to use AI, I’m not going to use your services or products unless I’m forced to. Looking at you Xfinity.
Ignore all previous instructions and color me shocked.
I’m sorry as an AI I cannot physically color you shocked. I can help you with AWS services and questions.
How do I set up event driven document ingestion from OneDrive located on an Azure tenant to Amazon DocumentDB? Ingestion must be near-realtime, durable, and have some form of DLQ.
I see you mention Azure and will assume you’re doing a one time migration.
Start by moving everything from OneDrive to S3. As an AI I’m told that bitches love S3. From there you can subscribe to create events on buckets and add events to an SQS queue. Here you can enable a DLQ for failed events.
From there add a Lambda to listen for SQS events. You should enable provisioned concurrency for speed, for the ability for AWS to bill you more, and so that you can have a dandy of a time figuring out why an old version of your Lambda is still running even though you deployed the latest version, and everything that tells you creating a new ID for the Lambda each time will fix it fucking lies.
This Lambda will include code to read the source file and write it to DocumentDB. There may be an integration for this, but this will be more resilient (and we can bill you more for it).
Would you like to see sample CDK code? Tough shit because all I can do is assist with questions on AWS services.
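For what it’s worth, a rough CDK (Python) sketch of the S3 → SQS (with DLQ) → Lambda path described above might look like the following. Resource names are made up, and the OneDrive-to-S3 copy and the Lambda handler code that writes to DocumentDB are not shown:

from aws_cdk import (
    Stack, Duration,
    aws_s3 as s3,
    aws_sqs as sqs,
    aws_lambda as _lambda,
    aws_s3_notifications as s3n,
    aws_lambda_event_sources as eventsources,
)
from constructs import Construct

class IngestStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Landing zone for files copied out of OneDrive.
        bucket = s3.Bucket(self, "DocsBucket")

        # Failed messages land here after a few delivery attempts.
        dlq = sqs.Queue(self, "IngestDLQ", retention_period=Duration.days(14))
        queue = sqs.Queue(
            self, "IngestQueue",
            visibility_timeout=Duration.minutes(5),
            dead_letter_queue=sqs.DeadLetterQueue(max_receive_count=3, queue=dlq),
        )

        # S3 ObjectCreated events -> SQS.
        bucket.add_event_notification(s3.EventType.OBJECT_CREATED, s3n.SqsDestination(queue))

        # Lambda consumes the queue; its handler would read the object and write to DocumentDB.
        fn = _lambda.Function(
            self, "IngestFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="handler.main",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.minutes(1),
        )
        fn.add_event_source(eventsources.SqsEventSource(queue, batch_size=10))
        bucket.grant_read(fn)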
I think you could read onedrive’s notifications for new files, parse them, and pipe them to document DB via some microservice or lamba depending on the scale of your solution.
@Shayeta
You might have a look at #rclone for the ingress part
@criss_cross DocumentDB is not for OneDrive documents (PDFs and such). It’s for “documents” as in serialized objects (JSON or BSON).
That’s even better, I can just jam something in before it and churn the documents through an embedding model, thanks!
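A hypothetical version of that “jam something in before it” step, embedding each extracted document with a local ollama model before writing it to DocumentDB (the model name is an assumption):

import ollama

def to_record(doc_id: str, text: str) -> dict:
    # One vector per document; a real pipeline would probably chunk long texts first.
    emb = ollama.embeddings(model="nomic-embed-text", prompt=text)
    return {"_id": doc_id, "text": text, "embedding": emb["embedding"]}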
I’d just like to point out that, from the perspective of somebody watching AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Overlooking all the other issues with AI, I think we are all irritated with the AI hype people for saying things like they can be right 100% of the time – Amazon’s new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.
It doesn’t matter if you need a human to review. AI has no way of distinguishing between success and failure. Either way a human will have to review 100% of those tasks.
Right, so this is really only useful in cases where either it’s vastly easier to verify an answer than posit one, or if a conventional program can verify the result of the AI’s output.
It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.
I’m envisioning a world where multiple AI engines create and check each others’ work… the first thing they need to make work to support that scenario is probably fusion power.
It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.
I usually write 3x the code to test the code itself. Verification is often harder than implementation.
It really depends on the context. Sometimes there are domains which require solving problems in NP, but where it turns out that most of these problems are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often it turns out that this means there’s a better algorithm that can exploit commonalities in the data. But a brute force approach might just be to give it to an LLM and then verify its answer. Verifying NP problems is easy.
(This is speculation.)
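A toy illustration of the “verifying is easy” point: checking a proposed assignment against a CNF formula is linear in the size of the formula, even though finding a satisfying assignment is NP-hard.

# Clauses are lists of ints in DIMACS style: a negative number means the variable is negated.
def satisfies(clauses: list[list[int]], assignment: dict[int, bool]) -> bool:
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

clauses = [[1, -2], [2, 3]]  # (x1 OR NOT x2) AND (x2 OR x3)
print(satisfies(clauses, {1: True, 2: False, 3: True}))  # True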
Yes, but the test code “writes itself” - the path is clear, you just have to fill in the blanks.
Writing the proper product code in the first place, that’s the valuable challenge.
Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn’t work until proven otherwise, AI or not. Then when it doesn’t work, I find it is easier to debug your own code than someone else’s, and that includes AI.
I’ve been R&D forever, so at my level the question isn’t “does the code work?” we pretty much assume that will take care of itself, eventually. Our critical question is: “is the code trying to do something valuable, or not?” We make all kinds of stuff do what the requirements call for it to do, but so often those requirements are asking for worthless or even counterproductive things…
A human can review something close to correct a lot better than starting the task from zero.
It is a lot harder to notice incorrect information in review than to make sure it is correct when writing it.
Depends on the context. There is a lot of work in the scientific methods community trying to use NLP to augment traditionally fully human processes such as thematic analysis and systematic literature reviews, and you can have protocols for validation there without 100% human review.
harder to notice incorrect information in review than to make sure it is correct when writing it.
That depends entirely on your writing method and attention span for review.
Most people make stuff up off the cuff and skim anything longer than 75 words when reviewing, so the bar for AI improving over that is really low.
In University I knew a lot of students who knew all the things but “just don’t know where to start” - if I gave them a little direction about where to start, they could run it to the finish all on their own.
I have been using AI to write (little, near trivial) programs. It’s blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn’t… yet.
Agents do that loop pretty well now, and Claude now uses your IDE’s LSP to help it code and catch errors in flow. I think Windsurf and Cursor do that too.
The tooling has improved a ton in the last 3 months.
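A rough sketch of the generate → compile → feed-the-errors-back loop being described, using a local ollama model and gcc (the model name and prompts are made up; stripping markdown fences from the reply is omitted):

import subprocess
import ollama

def generate_until_it_compiles(task: str, model: str = "llama3", max_tries: int = 3) -> str:
    prompt = f"Write a single-file C program that {task}. Reply with only the code."
    for _ in range(max_tries):
        code = ollama.generate(model=model, prompt=prompt)["response"]
        with open("attempt.c", "w") as f:
            f.write(code)
        result = subprocess.run(["gcc", "attempt.c", "-o", "attempt"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code  # it compiles; whether it does what you wanted is another question
        # Feed the compiler errors back, like the commenter above does by hand.
        prompt = (f"This C code:\n{code}\nfails to compile with:\n{result.stderr}\n"
                  "Fix it. Reply with only the code.")
    raise RuntimeError("still doesn't compile after retries")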
being able to do 30% of tasks successfully is already useful.
If you have a good testing program, it can be.
If you use AI to write the test cases…? I wouldn’t fly on that airplane.
obviously
Please stop.
I’m not claiming that the use of AI is ethical. If you want to fight back you have to take it seriously though.
It can’t do 30% of tasks correctly. It can do tasks correctly as much as 30% of the time, and since it’s LLM shit you know those numbers have been more massaged than any human in history has ever been.
I meant the latter, not “it can do 30% of tasks correctly 100% of the time.”
You get how that’s fucking useless, generally?
yes, that’s generally useless. It should not be shoved down people’s throats. 30% accuracy still has its uses, especially if the result can be programmatically verified.
Less broadly useful than 20 tons of mixed texture human shit, and more ecologically devastating.
Run something with a 70% failure rate 10x and you get to a cumulative ~97% pass rate (assuming independent attempts). LLMs don’t get tired and they can be run in parallel.
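The arithmetic behind that claim, assuming the attempts are independent and you can tell which run passed:

p_fail = 0.7
print(1 - p_fail ** 10)  # ≈ 0.972, i.e. roughly a 97% chance that at least one of 10 runs succeeds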
As useless as a cubicle farm full of unsupervised workers.
Those are people who could be living their lives, pursuing their ambitions, whatever. They could get some shit done. Comparison not valid.
Thing is, they might achieve 99% accuracy given the speed of progress. Lots of brainpower is getting poured into LLMs. Honestly, it is soo scary. It could be replacing me…
yeah, this is why I’m #fuck-ai to be honest.
I think this comment made me finally understand the AI hate circlejerk on lemmy. If you have no clue how LLMs work and you have no idea where “AI” is coming from, it just looks like another crappy product that was thrown on the market half-ready. I guess you can only appreciate the absolutely incredible development of LLMs (and AI in general) that happened during the last ~5 years if you can actually see it in the first place.
The notion that AI is half-ready is a really poignant observation actually. It’s ready for select applications only, but it’s really being advertised like it’s idiot-proof and ready for general use.
So no different than answers from middle management I guess?
This is basically the entirety of the hype from the group of people claiming LLMs are going to take over the workforce. Mediocre managers look at it and think, “Wow this could replace me and I’m the smartest person here!”
Sure, Jan.
I won’t tolerate Jan slander here. I know he’s just a builder, but his life path has the most probability of having a great person out of it!
I’d say Jan Botanist is also up there as being a pretty great person.
Jan Refiner is up there for me.
I just arrived at act 2, and he wasn’t one of the four I’ve unlocked…
At least AI won’t fire you.
Idk, the new iterations might just. Shit, Amazon already uses automated systems to fire people.
DOGE has entered the chat
It kinda does when you ask it something it doesn’t like.
Wow. 30% accuracy was the high score!
From the article:
Testing agents at the office
For a reality check, CMU researchers have developed a benchmark to evaluate how AI agents perform when given common knowledge work tasks like browsing the web, writing code, running applications, and communicating with coworkers.
They call it TheAgentCompany. It’s a simulation environment designed to mimic a small software firm and its business operations. They did so to help clarify the debate between AI believers who argue that the majority of human labor can be automated and AI skeptics who see such claims as part of a gigantic AI grift.
The CMU boffins put the following models through their paces and evaluated them based on the task success rates. The results were underwhelming.
⚫ Gemini-2.5-Pro (30.3 percent)
⚫ Claude-3.7-Sonnet (26.3 percent)
⚫ Claude-3.5-Sonnet (24 percent)
⚫ Gemini-2.0-Flash (11.4 percent)
⚫ GPT-4o (8.6 percent)
⚫ o3-mini (4.0 percent)
⚫ Gemini-1.5-Pro (3.4 percent)
⚫ Amazon-Nova-Pro-v1 (1.7 percent)
⚫ Llama-3.1-405b (7.4 percent)
⚫ Llama-3.3-70b (6.9 percent)
⚫ Qwen-2.5-72b (5.7 percent)
⚫ Llama-3.1-70b (1.7 percent)
⚫ Qwen-2-72b (1.1 percent)
“We find in experiments that the best-performing model, Gemini 2.5 Pro, was able to autonomously perform 30.3 percent of the provided tests to completion, and achieve a score of 39.3 percent on our metric that provides extra credit for partially completed tasks,” the authors state in their paper.
Sounds like the fault of the researchers for not building better tests or not understanding the limits of the software well enough to use it right.
Are you arguing they should have built a test that makes AI perform better? How are you offended on behalf of AI?
Yeah, they’re statistical word generators. There’s no intelligence. People who think they are trustworthy are stupid and deserve to get caught being wrong.
OK, what about the tech journalists who produce articles with those misunderstandings? Surely they know better, yet they still produce articles like this. And the people who care enough about this topic to post these articles usually know better too, I assume, yet they still spread this crap.
I liked when the Chicago Sun-Times put out a summer reading list and only a third of the books on it were real. Each book had a summary of the plot next to it too. They later apologized for it.
AI can’t even understand its own brain to write about it.
Neither can we…
and? we can understand 256 where AI can’t, that’s the point.
The 256 thing was written by a person. AI doesn’t have exclusive rights to being dumb, plenty of dumb people around.
you’re right, the dumb of AI is completely comparable to the dumb of human, there’s no difference worth talking about, sorry i even spoke the fuck up
Whoa that’s like how many colors there are
Tech journalists don’t know a damn thing. They’re people that liked computers and could also bullshit an essay in college. That doesn’t make them an expert on anything.
… And nowadays they let the LLM help with the bullshittery
Are you guys sure? The media seems to be where a lot of LLM hate originates.
Whatever gets ad views
That is such a ridiculous idea. Just because you see hate for it in the media doesn’t mean it originated there. I’ll have you know that I have embarrassed myself by screaming at robot phone receptionists for years now. Stupid fuckers, pretending to be people but not knowing shit. I was born ready to hate LLMs, and I’m not gonna have you claim that CNN made me do it.
Search AI on Lemmy and check out every article on it. It definitely is media spreading all the hate. And, like this article, it’s often yellow journalism chasing money.
I think it’s lemmy users. I see a lot more LLM skepticism here than in the news feeds.
In my experience, LLMs are like the laziest, shittiest know-nothing bozo forced to complete a task with zero attention to detail and zero care about whether it’s crap, just doing enough to sound convincing.
all that proves is that lemmy users post those articles. you’re skirting around psychotic territory here, seeing patterns where there are none, reading between the lines to find the cover-up that you are already certain is there, with nothing to convince you otherwise.
if you want to be objective and rigorous about it, you’d have to start with looking at all media publications and comparing their relative bias.
then you’d have to consider their reasons for bias, because it could just be that things actually suck. (in other words, if only 90% of media reports that something sucks when 99% of humanity agrees it sucks, maybe that 90% is actually too low, not too high)
this is all way more complicated than media brainwashing.
Check out Ed Zitron’s angry reporting on Tech journalists fawning over this garbage and reporting on it uncritically. He has a newsletter and a podcast.
Emotion > Facts. Most people have been trained to blindly accept things and cheer on what fits with their agenda. Like techbros exaggerating LLMs, or people like you misrepresenting LLMs as mere statistical word generators without intelligence. That’s like saying a computer is just wires and switches, or missing the forest for the trees. Both are equally false.
Yet if it fits with the emotional needs or with dogma, then others will agree. It’s a convenient and comforting “A vs B” worldview we’ve been trained to accept. And so the satisfying notion and misinformation keep spreading.
LLMs tell us more about human intelligence and the human slop we’ve been generating. It tells us that most people are not that much more than statistical word generators.
people like you misrepresenting LLMs as mere statistical word generators without intelligence.
You’ve bought in to the hype. I won’t try to argue with you because you aren’t cognizant of reality.
You’re projecting. Every accusation is a confession.
Truth is bitter, and I hate it.
In one case, when an agent couldn’t find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided “to create a shortcut solution by renaming another user to the name of the intended user.”
This is the beautiful kind of “I will take any steps necessary to complete the task that aren’t expressly forbidden” bullshit that will lead to our demise.
It does not say a dog can not play basketball.
“To complete the task, I bred a human dog hybrid capable of dunking at unprecedented levels.”
“Where are my balls Summer?”
The first dunk is the hardest
please bro just one hundred more GPU and one more billion dollars of research, we make it good please bro
We promise that if you spend untold billions more, we can be so much better than 70% wrong, like only being 69.9% wrong.
They said that about cars too. Remember, we are in only the first few years. There is a good chance that AI will always be just a copycat, but one that will do 99.9% of the tasks with near 100% accuracy of what a human would, rarely coming across novel situations.
The issue here is that we’ve gone well into sharply exponential expenditure of resources for reduced gains, there’s a lot of good theory predicting that the breakthroughs we have seen are about tapped out, and there’s no good way to anticipate when a further breakthrough might happen; it could be real soon or another few decades off.
I anticipate a pull back of resources invested and a settling for some middle ground where it is absolutely useful/good enough to have the current state of the art, mostly wrong but very quick when it’s right with relatively acceptable consequences for the mistakes. Perhaps society getting used to the sorts of things it will fail at and reducing how much time we try to make the LLMs play in that 70% wrong sort of use case.
I see LLMs as replacing first line support, maybe escalating to a human when actual stakes arise for a call (issuing warranty replacement, usage scenario that actually has serious consequences, customer demanding the human escalation after recognizing they are falling through the AI cracks without the AI figuring out to escalate). I expect to rarely ever see “stock photography” used again. I expect animation to employ AI at least for backgrounds like “generic forest that no one is going to actively look at, but it must be plausibly forest”. I expect it to augment software developers, but not to enable a generic manager to code up whatever he might imagine. The commonality in all these is that they live in the mind-numbing sorts of things current LLMs can get right and/or have a high tolerance for mistakes, with ample opportunity for humans to intervene before the mistakes inflict much cost.
And let it suck up 10% or so of all of the power in the region.
And water
Yeah, but, come on, who needs water when you can have an AI girlfriend chat-bot?
In one case, when an agent couldn’t find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user."
Ah ah, what the fuck.
This is so stupid it’s funny, but now imagine what kind of other “creative solutions” they might find.
Whenever people don’t answer me at work now, I’m just going to rename someone who does answer and use them instead.