Over half of all tech industry workers view AI as overrated
The other half does not know how people rate things.
There is a lot of marketing about how it’s going to disrupt every possible industry, but I don’t think that’s reasonable. Generative AI has uses, but I’m not totally convinced it’s going to be this insane omni-tool just yet.
Whenever we have new technology, there will always be folks flinging shit at the wall to see what sticks. AI is no exception, and you're most likely correct that not every problem needs an AI-powered solution.
Sure, it is already changing some fields, and more fields will start to feel the impact in the coming decades. However, we're still pretty far from a true GPAI, so letting the AI do all the work isn't going to happen any time soon.
Garbage in, garbage out still applies here. If we don't collect data in the appropriate way, we can't expect to teach a model with it. Once we start collecting data with ML in mind, that's when things start changing quickly. Currently, we have lots and lots of garbage data about everything, and that's why we aren't using AI for more.
Generative AI is great, but I don't expect it to fully replace human workers, at least not entirely. The current tech has a lot of limitations. I can see it greatly improving work productivity and handling some monotonous and mundane tasks, but it would require some sort of human supervision.
generative ai sucks ass at just about everything
I’ll try and remember that as it makes my job significantly easier
It does help for certain tasks, but it's definitely not a silver bullet. And it sucks seriously on some more niche topics, which shows that to be effective it requires a huge, very good quality training dataset with very low bias, and that, in my opinion, is the hardest nut to crack.
Of course, because the hype didn't come from tech people, but from content writers, designers, PR people, etc. who all thought they didn't need tech people anymore. The moment ChatGPT started being popular, I started getting debugging requests from a few designers. They went there and asked it to write a plugin or a script they needed. Only problem was it didn't really work like it should. Debugging that code was a nightmare.
I've seen a few clever uses. A couple of our clients made a "chat bot" whose reference was their poorly written documentation. So you'd ask the bot something technical related to that documentation and it would decipher the mess. I still claim writing better documentation would have been the smarter move, but what do I know.
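For anyone curious how those docs bots usually work under the hood: the bot doesn't "know" the docs, it retrieves the most relevant chunks and pastes them into the model's prompt as context. Here's a toy sketch of the retrieval step using plain keyword overlap (real setups typically use embeddings, and all the names here are made up for illustration):

```python
# Toy retrieval step for a docs-grounded chatbot: score each documentation
# chunk by how many words it shares with the question, return the best ones.
# A real system would embed chunks and do a vector similarity search instead.
def retrieve(question: str, chunks: list[str], top_k: int = 1) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

docs = [
    "To rotate API keys, open Settings and click Regenerate.",
    "Billing happens on the first of each month.",
]
# Shares "rotate" and "api" with the first chunk, nothing with the second.
print(retrieve("how do i rotate my api keys", docs))
```

The retrieved chunk then gets stuffed into the prompt along with the user's question, which is why the bot is only ever as good as the documentation it's pointed at.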
I remember when it first came out, I asked it to help me write a MapperConfig custom strategy, and the answer it gave me was so fantastically wrong - even with prompting - that I lost an afternoon. Honestly, the only useful thing I've found for it is getting it to find potential syntax errors in Terraform code that the plan might miss. It doesn't even complement my programming skills the way a traditional search engine can; instead it assumes a solution that is usually wrong, and you are left trying to build your house on the boilerplate sand it spits out at you.
deleted by creator
I've found the free one can sometimes answer tip-of-my-tongue questions, but yeah, anything even remotely obscure and it will just lie and say that doesn't exist, especially if you stray a little too close to the puritanical guardrails. One time I was going down a rabbit hole researching human sex organ variations, and it flat out told me the people in South America who grow a penis at 12 don't exist - until I found the name guevedoces on my own, and wouldn't you know it, then it knew what I was talking about.
Have you used copilot? I find it to be fantastically useful.
I also have tried to use it to help with programming problems, and it is confidently incorrect a high percentage (50%) of the time. It will fabricate package names, functions, and more. When you ask it to correct itself, it will give another confidently incorrect answer. Do this a few more times and you could end up with it suggesting the first incorrect answer it gave you and then you realize it is literally leading you in circles.
It’s definitely a nice option to check something quickly, and it has given me some good information, but you really can’t blindly trust its output.
At least with programming, you can validate fairly quickly that it is giving bad information. With other real-life applications, like using it for cooking/baking or trip planning, the consequences of bad information could be quite a bit worse.
As with all tech, it depends. It's another tool in my toolbox, and a useful one at that. Will it replace me in my job? Not anytime soon. However, it will make me more proficient at my job, and my 30+ years of experience will keep its bad ideas out of production. If my bosses decide tomorrow that I can be replaced with AI in its current state, they deserve what they have coming. That said, they are willing to pay for additional tooling and have provided me with multiple AI engines, and I couldn't be more thrilled. I'd rather give AI a simple task to do the busy work than work with overseas developers who get it wrong time and time again and take a week to iterate while asking how for loops work in Python.
I'm definitely looking forward to adding this as a tool: I'm in DevOps, so I have to jump back and forth among many different programming languages. It should really help me switch contexts faster.
… somehow I’m one of the “experts” using JavaScript despite never learning or using it. Hooray for my search engine skills I guess!
Because it is?
Overrated? Compared to what, AGI that doesn't exist yet? Overhyped though? Absolutely.
We went from very little AI content making its way to your eyeballs and ears, to it occurring daily, if not during your very session here today. So many thumbnails and writeups have used AI that to say it is overrated is a bit absurd. Unless you were expecting it to be AGI - then yes, the AI of today is overrated - but it does not matter, as you are consuming it still.
deleted by creator
We went from very little AI content making its way to your eyeballs and ears, to it occurring daily if not during your very session here today.
And this AI content that you’re consuming, is that an improvement?
If not maybe it’s uh, what’s the word? Overrated.
It is for sure an improvement; for example, what would have been a very basic thumbnail is now something much more interesting in so many instances.
Lol look at the glorious thumbnails!
The text is dreadful. It’s somehow worse than the sweatshop content it replaced.
I asked chatGPT to generate a list of 5 random words, and then tell me the fourth word from the bottom. It kept telling me the third. I corrected it, and it gave me the right word. I asked it again, and it made the same error. It does amazing things while failing comically at simple tasks. There is a lot of procedural code added to plug the leaks. Doesn’t mean it’s overrated, but when something is hyped hard enough as being able to replace human expertise, any crack in the system becomes ammunition for dismissal. I see it more as a revolutionary technology going through evolutionary growing pains. I think it’s actually underrated in its future potential and worrisome in the fact that its processing is essentially a black box that can’t be understood at the same level as traditional coding. You can’t debug it or trace the exact procedure that needs patching.
I believe I saw that this kind of issue is because of the token system. Like, if you tell it to find a word starting with a certain letter, it can't really do it without a hard-coded workaround, because it doesn't know about single letters, only about tokens, which are chunks of the text.
It's definitely more complicated than that, but it doesn't mean AI is bad, only that the current implementation can't do these kinds of tasks.

"There is a lot of procedural code added to plug the leaks."

It's definitely feasible, like what they tried to do with Wolfram Alpha - but do you have a source for this?
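To make the token point concrete: LLM tokenizers split text into subword pieces, so the model never "sees" individual letters. Here's a toy greedy longest-match tokenizer over a tiny invented vocabulary (real tokenizers like BPE are learned from data, but the effect on letter-level tasks is similar):

```python
# Toy illustration (NOT a real LLM tokenizer): greedy longest-match
# subword tokenization over a tiny made-up vocabulary.
VOCAB = {"straw", "berry", "st", "raw", "ber", "ry"}

def tokenize(word: str) -> list[str]:
    """Split a word into the longest matching vocabulary pieces, left to right."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest remaining substring first.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            # Unknown character: fall back to emitting it on its own.
            tokens.append(word[i])
            i += 1
    return tokens

print(tokenize("strawberry"))  # ['straw', 'berry']
```

The model operates on two opaque units here, not ten letters, which is why "count the letters" or "fourth word from the bottom" style tasks trip it up without special-case handling.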
I work in AI, and I think AI is overrated.
Yeah… About that… How’s block chain going these days? Solved all the problems in the world yet?
Eh, AI has been around for years as the engine behind OCR and document classification. I worked in the finance space and saw some really good applications, but not for the common person.
People who use ChatGPT to program for them deserve their programs to fail
Yeah, they should copy-paste answers from Stack Overflow like real developers.
Real developers just hit tab on whatever copilot tells them to
Guilty
deleted by creator
You sound like someone who doesn’t know how to program and is allowing yourself to become hopelessly dependent on corporations to be able to do anything.
deleted by creator
removed by mod
Ty for this great pasta moment
“What the fuck did you just fucking say about me, you little bitch? I’ll have you know I’ve authored multiple FOSS libraries, and I’ve been involved in numerous secret application deployments on Production and I have over 300 confirmed commits. I am…”
I do so try
Giant markov chains are shit, not a hot take imo
deleted by creator
I agree, but you make it sound like a fancy chat bot can’t do amazing things. I don’t use any openAI products for moral reasons, but LLMs in general are amazing tools, and good entertainment.
deleted by creator
That’s probably the most humanlike aspect of them.
deleted by creator
Well they haven’t seized the means of production yet, so it can’t be Lemmy’s.
deleted by creator
Quite the opposite. People who understand how LLMs work know their limitations. And AI in general is incapable of deduction and creativity. It simply is not able to produce something new by using existing knowledge. Sure, it can generate a number of outputs through some transformations of the input data. But not create.
If you think developers, engineers, architects, and others are going to lose their jobs, you are severely mistaken. Even for content writers it's a temporary setback, because AI-generated content is just limited, and as soon as the quality of the human input to that same AI starts dropping, so will the quality of the AI's output.
deleted by creator
It's a tool that you have to babysit, at least for the foreseeable future. In general it's always a bad idea for a human to supervise the machine, because in time we grow complacent about its results, and that's when the mistakes happen. When it comes to writing content, the biggest problems are inaccuracies or the odd typo. Your comparison to CAD software is not a good one, since CAD doesn't produce anything on its own. It's software assisting a human, not generating content. Imagine the horror of CAD software auto-generating bridges. It would be only a matter of time before someone skipped double-checking what was generated. And I am fully aware there are AI-generated structural parts and testing, but that's a part of the design process where the results have to be checked by a human again.
I do think AI has a place and purpose, but it's not going to cost people their jobs, only help them do them more efficiently. It's great at assisting people, but not at replacing them. If there's a manager out there who thinks AI can replace a human, then I can show you a bad manager who doesn't understand what AI is. In the future we might arrive at a point where AI is good enough to do some jobs humans find too repetitive or dangerous. But we are far from that.
Also, an LLM is not something I'd call AI, or at least not intelligent. They are specialized neural networks, trained on human input, whose sole purpose is predicting the next word or sentence in relation to what's entered as input. Glorified and overly complicated auto-complete. There's no intelligence involved.
deleted by creator
That's not exactly how I view the outcome of introducing new tools, but that will have to be the agree-to-disagree part. In my opinion, tools remove tedious tasks completely or make them easier, giving you more time to focus on what matters.