• @constantokra@lemmy.one
      7 · 10 months ago

      I work in a technical field, and the amount of bad work I see is way higher than you’d think. There are companies without anyone competent to do what they claim to do. Astonishingly, they make money at it and frequently don’t get caught. Sometimes they have to hire someone like me to fix their bad work when they do cause themselves actual problems, but that’s much less expensive than hiring qualified people in the first place. That’s probably where we’re headed with AIs, and honestly it won’t be much different from how things are now, except for the horrible dystopian nature of replacing people with machines. As time goes on they’ll get fed the corrections competent people make to their output, and the number of competent people necessary will shrink and shrink, till the work product is good enough that nobody cares to get it corrected. Then there won’t be anyone getting paid to do the job, and because of AI’s black-box nature we’ll completely lose the knowledge to perform the job in the first place.

    • @suction@lemmy.world
      7 · 10 months ago

      “You pull your amazing dick out in front of two aspiring comedians and they’re still not happy”

    • @FiniteBanjo@lemmy.today
      6 · 10 months ago

      They’re convinced that AI might be cheaper for the same result. Partly because power and water are subsidized more heavily than humans are.

    • @megopie@lemmy.blahaj.zone
      1 · 10 months ago

      Because they’re playing a role, an actor so to speak; they’re not presenting their own personal opinions. They’re vocalizing and embodying the output of a series of complex internal mechanisms, a slow-moving self-optimizing system beyond the comprehension of any individual working within it.

      Much like AIs, it often outputs stupid shit.

    • @eee@lemm.ee
      10 · 10 months ago

      It CAN BE amazing in certain situations. CEO tomfoolery is what’s making generative AI a joke to the average user.

      • ChaoticNeutralCzech
        3 · 10 months ago (edited)

        Yes. It’s not wrong 100% of the time, otherwise you could make a fortune by asking it for investment advice and then doing the opposite.

        What happened is like the current robot craze: they made the technology resemble humans, which drives attention and money. Specialized “robots” can indeed perform tedious tasks (CNC, pick-and-place machines) or work safely with heavier objects (construction equipment). Similarly, we can use AI to identify data forgery or fold proteins. If we try to make either human-like, they will appear to do a wide variety of tasks (which drives sales & investment) but not be great at any of them. You wouldn’t buy a humanoid robot just to reuse your existing shovel if excavators are cheaper. (No, I don’t think a humanoid robot with digging capabilities will ever be cheaper than a standard excavator.)

        • Match!!
          2 · 10 months ago

          It’s actually really frustrating that LLMs have gotten all the funding. We’re finally at the point where we can build reasonably priced, purpose-built AI, and instead the CEOs want to push trashbag LLMs into everything.

          • ChaoticNeutralCzech
            3 · 10 months ago

            Well, a conversational AI with sub-human abilities still has some uses. Notably scamming people en masse so human email scammers will be put out of their jobs /s

    • Match!!
      21 · 10 months ago

      Generative AI is amazing for some niche tasks that are not what it’s being used for

        • @Waraugh@lemmy.dbzer0.com
          12 · 10 months ago

          Creating drafts for the white papers my boss asks for every week about whatever stupid shit is on his mind. It used to take a couple of days; now it’s done in a day at most, and I spend my Friday doing chores, checking my email and chat every once in a while, until I send him the completed version before logging out for the weekend.

          • @BluesF@lemmy.world
            9 · 10 months ago

            Writing boring shit is LLM dream stuff, especially tedious corpo shit. I have to write letters and such a lot, and it makes it so much easier having a machine that can summarise material and write it in dry corporate language in 10 seconds. I already have to proofread my own writing, and there are almost always 1 or 2 other approvers, so checking it for errors is no extra effort.

        • Match!!
          5 · 10 months ago

          It is excellent for producing bland filler.

          • @Hackworth@lemmy.world
            1 · 10 months ago (edited)

            I understand this perspective, because the text, image, audio, and video generators all default to the most generic solution. I challenge you to explore past the surface with the simple goal of examining something you enjoy from new angles. All of the interesting work in generative AI is being done at the edges of the models’ semantic spaces. Avoid getting stuck in workflows. Try new ones regularly and compare their efficacies. I’m constantly finding use cases that I end up putting to practical use - sometimes immediately, sometimes six months later when the need arises.

    • @MagicShel@programming.dev
      25 · 10 months ago (edited)

      The more you use generative AI, the less amazing it is. Don’t get me wrong, I enjoy it, but it really can only impress you when it’s talking about a subject you know nothing of. The pictures are terrible, though way better than I could do. The coding is terrible, although it’s amazingly fast for quality similar to a junior developer’s. The prose seems amazing at first, but as you use it over and over you realize it’s quite bland, continually reverting to a default voice even though it can write really good short passages (that’s specific to ChatGPT-like instruct models; I haven’t seen it with other models).

      I’ve been playing with generative AI for about 5 years, and it has certainly gotten much better in some ways, but it’s still just a neat toy in search of a problem it can solve. There’s a lot of money going into it in the hope it will improve to the point where it can solve some of the things we really want it to, but I’m not sure it ever reliably will. Maybe some other AI technology, but not LLM.

      • @Hackworth@lemmy.world
        8 · 10 months ago

        It saves me 10-20 hours of work every week as a corpo video producer, and I use that time to experiment with AI - which has allowed our small team to produce work that would be completely outside our resources otherwise. Without a single additional breakthrough, we’d be finding novel ways to be productive with the current form of generative AI for decades. I understand the desire to temper expectations, and I agree that companies and providers are not handling this well at all. But the tech is already solid. It’s just being misused more often than it’s being wielded well.

        • @MagicShel@programming.dev
          11 · 10 months ago (edited)

          I don’t have the experience to refute that. But I see the same claims from developers all the time, swearing AI saves them hours, and that’s a domain I know well: AI does certain very limited things quite well. It can spit out boilerplate pretty quickly, often with few enough errors that I can fix them faster than I could’ve written everything by hand. But it very much relies on me knowing what I’m doing and immediately recognizing the garbage for what it is.

          It does make me a little bit faster at the stuff I’m already good at, at the cost of leading me down some wild rabbit holes on things I don’t know so well. It’s not nothing, but it’s not what I would call professional-grade.

        • @suction@lemmy.world
          8 · 10 months ago

          Nobody doubts that it’s useful for helping with bland low-tier work like corpo videos that people are forced to watch to keep their jobs.

          • @Hackworth@lemmy.world
            2 · 10 months ago (edited)

            I just meant I work for a corporation. I produce videos for marketing, been doing it for 25 years.

    • @darthelmet@lemmy.world
      81 · 10 months ago

      Yeah. It’s more like:

      Researchers: “Look at our child crawl! This is a big milestone. We can’t wait to see what he’ll do in the future.”

      CEOs: “Give that baby a job!”

      AI stuff was so cool to learn about in school, but it was also really clear how much further we had to go. I’m kind of worried. We already had one period of AI overhype lead to a crash in research funding for decades. I really hope this bubble doesn’t do the same thing.

      • Match!!
        7 · 10 months ago

        Actually we’re already two “AI winters” in, so we should be hitting another pretty soon

        • Elsie
          4 · 10 months ago

          Can you explain? I’ve never heard of them before.

          • Match!!
            5 · 10 months ago

            AI as a field initially started getting big in the 1960s with machine translation and perceptrons (super-basic neural networks), which started promising but hit a wall basically immediately. Around 1974 the US military cut most of their funding to their AI projects because they weren’t working out, but by 1980 they started funding AI projects again because people had invented new AI approaches. Around 1984 people coined the term “AI winter” for the time when funding had dried up, which incidentally was right before funding dried up again in the 90s until around the 2010s.

      • Mossy Feathers (She/They)
        28 · 10 months ago

        I’m… honestly kinda okay with it crashing. It’d suck, because AI has a lot of potential outside of generative tasks, like science and medicine. However, we don’t really have the corporate ethics or morals for it, nor do we have the economic structure for it.

        AI at our current stage is guaranteed to cause problems even when used responsibly, because its entire goal is to do human tasks better than a human can. No matter how hard you try to avoid it, even if you do your best to think carefully and hire humans whenever possible, AI will end up replacing human jobs. What’s the point in hiring a bunch of people with a hyper-specialized understanding of a specific scientific field if an AI can do their work faster and better? If I’m not mistaken, normally having some form of hyper-specialization would be advantageous for the scientist because it means they can demand more for their expertise (so long as it’s paired with a general understanding of other fields).

        However, if you have to choose between 5 hyper-specialized and potentially expensive human scientists, or an AI designed to do the hyper-specialized task with 2~3 human generalists to design the input and interpret the output, which do you go with?

        So long as the output is the same or similar, the no-brainer would be to go with the 2~3 generalists and AI; it would require less funding and possibly less equipment - and that’s ignoring that, from what I’ve seen, AI tends to be better than human scientists in hyper-specialized tasks (though you still need scientists to design the input and parse the output). As such, you’re basically guaranteed to replace humans with AI.

        We just don’t have the society for that. We should be moving in that direction, but we’re not even close to being there yet. So, again, as much potential as AI has, I’m kinda okay if it crashes. There aren’t enough people who possess a brain capable of handling an AI-dominated world yet. There are too many people who see things like money, government, economics, etc as some kind of magical force of nature and not as human-made systems which only exist because we let them.

      • @ed_cock@feddit.de
        20 · 10 months ago

        The sheer waste of energy and mass production of garbage clogging up search results alone is enough to make me hope the bubble will pop reeeeal soon. Sucks for research but honestly the bad far outweighs the good right now, it has to die.

        • @MonkeMischief@lemmy.today
          3 · 10 months ago (edited)

          Yeah search is pretty useless now. I’m so over it. Trying to fix problems always has the top 15 results be like:

          “You might ask yourself, how is Error-13 on a Maytag Washer? Well first, let’s start with What Is a Maytag Washer. You would be right to assume washing clothes has been a task for thousands of years. The first washing machine was invented…” (Yes I wrote that by hand, how’d I do? Lol)

          It’s the same as how I really stopped caring whether crypto was gonna “revolutionize money” once it became a gold rush to hoard GPUs and subsequently any other component you could store a hash on.

          R&D and open source for the advancement of humanity is cool.

          Building enormous farms and burning out powerful components that could’ve been used for art and science, to instead prove-that-you-own-a-receipt-for-an-ugly-monkey-jpeg hoping it explodes in value, is appalling.

          I’m sure there was an ethical application way back there somewhere, but it just becomes a pump-and-dump scheme and ruins things for a lot of good people.

  • @ristoril_zip@lemmy.zip
    56 · 10 months ago

    I read a pretty convincing article title and subheading implying that the best use for so called “AI” would be to replace all corporate CEOs with it.

    I didn’t read the article but given how I’ve seen most CEOs behave it would probably be trivial to automate their behavior. Pursue short term profit boosts with no eye to the long term, cut workers and/or pay and/or benefits at every opportunity, attempt to deny unionization to the employees, tell the board and shareholders that everything is great, tell the employees that everything sucks, …

    • @snooggums@midwest.social
      36 · 10 months ago

      Then some hackers get in and reprogram the AI CEOs to value long term profit and employee training and productivity. The company grows and is massively profitable until some venture capitalists swoop in and kill the company to feed from the carcass.

      • Feydaikin
        8 · 10 months ago

        If your company is successful, that’s gonna happen anyway.

      • Boomer Humor Doomergod
        2 · 10 months ago

        Like that time in Ireland when the banks closed to protest a law and life went on just fine without them.

  • @uis@lemm.ee
    32 · 10 months ago

    CEOs (dumbasses who are constantly wrong): rush to replace everyone with AI before everyone replaces them with AI

    • @skuzz@discuss.tchncs.de
      16 · 10 months ago

      Funny thing is, the CEOs are exactly the ones to be replaced with AI. Mediocre talent that is sometimes wrong. Perfect place for an AI, and the AI could come to the next decision much faster at a fraction of the cost.

      • @Krauerking@lemy.lol
        4 · 10 months ago

        So, I’d say there’s a slight issue with replacing all decision makers with AI, because Walmart and Amazon already do it for employee efficiency. It means the staff are micromanaged and treated like machines, the same way the computer is.

        Walmart employees are moved around the floor like Roombas so they never interact with each other, and there’s no real way for customers to get hold of someone. Warehouse workers are overworked by bullshit notions of efficiency.

        Now, I get that it could be fixed by designing the AI systems to be more empathetic, but who chooses how they’re programmed? Still the board?

        We just need good bosses who interact with their employees on their level. We don’t need AI “replacing” anyone pretty much anywhere, but it can be used as a helpful tool.

        • @skuzz@discuss.tchncs.de
          2 · 10 months ago

          Yeah, apologies, I was being a bit glib there. Honestly, I kinda subscribe to the Star Trek: Insurrection Ba’ku people’s philosophy. “We believe that when you create a machine to do the work of a man, you take something away from the man.”

          While it makes sense to replace some tasks like dangerous mining or assembly line work away from humans, interaction roles and decision making roles both seem like they should remain very human.

          In the same way that nuclear missile launches during the Cold War always had real humans as the last line before a missile would actually be fired.

          I see AI as becoming a set of specialized tools for each job. You’re repairing a lawn mower; you have an AI multimeter-type device that you connect to some test points and converse with in some fashion to troubleshoot. All offline, and very limited in capabilities. The tech bros, meanwhile, think they created digital Jesus, and they’re desperate to figure out which Bible to jam him into. Meanwhile, corps across the planet are in a rush to get rid of their customer service roles en masse. Can you imagine 911 dispatch being replaced with AI? The human component is 100% needed there. (Albeit an extreme comparison.)

    • @JasonDJ@lemmy.zip
      6 · 10 months ago (edited)

      I don’t think any programmer would be dumb enough to take that bait.

      They would be held personally liable for any business decision that costs the stockholders money (while, of course, not being given anything extra when a business decision nets the stockholders a fortune).

  • @Couldbealeotard@lemmy.world
    23 · 10 months ago

    Have you seen the film Dark Star? Bomb number 20 gets stuck in the release bay with the detonation countdown still running, so they have to spacewalk out and convince the AI not to explode.

  • @Snowclone@lemmy.world
    122 · 10 months ago

    They put new AI controls on our traffic lights. It cost the city a fuck ton more money than fixing our dilapidated public pool would have. Now no one tries to turn left at a light, because the turn signals don’t activate. We threw out a perfectly good timer no one was complaining about.

    But no one from Silicon Valley is lobbying cities to buy pool equipment, I guess.

    • lazynooblet
      21 · 10 months ago

      Whilst it’s a shame this implementation sucks, I wish we would get intelligent traffic light controls that worked. Sitting at a light for 90 seconds in the dead of night without a car in sight is frustrating.

      • lemmyvore
        37 · 10 months ago

        That was a solved problem 20 years ago lol. We made working systems for this in our lab at Uni, it was one of our course group projects. It used combinations of sensors and microcontrollers.

        It’s not really the kind of problem that requires AI. You can do it with AI and image recognition or live traffic data but that’s more fitting for complex tasks like adjusting the entire grid live based on traffic conditions. It’s massively overkill for dead time switches.

        Even for grid optimization you shouldn’t jump into AI head first. It’s much better long term to analyze the underlying causes of grid congestion and come up with holistic solutions that address those problems, which often translate into low-tech or zero-tech solutions. I’ve seen intersections massively improved by a couple of signs, some markings and a handful of plastic poles.

        Throwing AI at problems is sort of a “spray and pray” approach that often goes about as badly as you can expect.
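
        The dead-time case really is microcontroller-simple. A minimal sketch of the kind of demand-actuated logic being described, with made-up timings and sensor names (this is an illustration, not the commenter’s actual lab project):

```python
# Demand-actuated signal sketch: the side road only gets a green phase
# when its loop sensor reports a waiting vehicle, instead of cycling on
# a blind fixed timer. All constants are illustrative.

MIN_GREEN = 10   # seconds the main road holds green before it can yield
SIDE_GREEN = 8   # seconds granted to the side road per actuation

def next_phase(current_phase, elapsed, side_sensor_active):
    """Return (phase, elapsed) for the next one-second tick."""
    if current_phase == "main_green":
        # Hold green for the main road until a vehicle is actually waiting
        # and the minimum green time has been served.
        if side_sensor_active and elapsed >= MIN_GREEN:
            return "side_green", 0
        return "main_green", elapsed + 1
    else:  # "side_green"
        # Give the side road a short window, then return to the main road.
        if elapsed >= SIDE_GREEN:
            return "main_green", 0
        return "side_green", elapsed + 1
```

        At night with no side traffic, `side_sensor_active` stays false and the main road simply keeps its green, which is the whole point.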

        • @MonkeMischief@lemmy.today
          2 · 10 months ago (edited)

          Throwing AI at problems is sort of a “spray and pray” approach that often goes about as badly as you can expect.

          I can see the headlines now: “New social media trend where people are asking traffic light Ai to solve the traveling salesman problem is causing massive traffic jams and record electricity costs for the city.”

        • @jeffhykin@lemm.ee
          2 · 8 months ago (edited)

          (I know I’m two months late)

          To back up what you’re saying, I work with ML, and the guy next to me does ML for traffic signal controllers. He basically established the benchmark for traffic signal simulators for reinforcement learning.

          Nothing works. All of the cutting-edge reinforcement algorithms, all the existing publications, some of which train for months, perform worse than “fixed policy” controllers. The issue isn’t the brains of the system, it’s the fact that stoplights are fricken blind to what is happening.
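
          That blindness point can be shown with a toy model (entirely made up here, not from the benchmark mentioned above): in a tiny queue simulation, a controller that can see queue lengths beats a blind fixed cycle, and a sensor-blind stoplight can only ever run the fixed one, no matter how clever its training was.

```python
import random

def simulate(policy, steps=1000, arrival_p=0.3, seed=0):
    """Toy single intersection: two approaches, one green at a time;
    the green approach discharges one car per step. Returns total
    car-steps spent waiting (lower is better)."""
    rng = random.Random(seed)
    queues = [0, 0]
    waited = 0
    for t in range(steps):
        for i in (0, 1):
            if rng.random() < arrival_p:
                queues[i] += 1
        green = policy(t, queues)
        if queues[green] > 0:
            queues[green] -= 1
        waited += sum(queues)
    return waited

def fixed_policy(t, queues):
    # Blind: alternate green every 5 steps regardless of demand.
    return (t // 5) % 2

def greedy_policy(t, queues):
    # Sighted: serve the longer queue. This needs exactly the state
    # information a blind stoplight doesn't have.
    return 0 if queues[0] >= queues[1] else 1
```

          Running both over the same arrival sequence, the sighted greedy policy accumulates less total waiting than the blind fixed cycle, which is the gap better sensing would close regardless of the control algorithm.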

      • @makingStuffForFun@lemmy.ml
        8 · 10 months ago

        We are a small software company trying to find a useful use case. Currently we can’t. However, we’re watching closely; it would have to keep improving at its current rate.

    • @MonkeMischief@lemmy.today
      2 · 10 months ago

      It’s funny because this is what I was afraid of with “AI” threatening humanity.

      Not that we’d get super-intelligences running Terminators, but that we’d be using black-box “I dunno how it does it, we just trained it and let it go” tech in civilization-critical applications, because it sounded cool to people with more dollars than brain cells.

    • @SkyeStarfall@lemmy.blahaj.zone
      20 · 10 months ago (edited)

      You need to really specify what is meant by “AI” here. Chances are it’s some form of smart traffic lights to improve traffic flow, which is not all that special. It has nothing to do with LLMs.

      • @SatouKazuma@programming.dev
        1 · 10 months ago (edited)

        I’m guessing it’s some sort of image recognition and maybe some sort of switch under the pavement telling the light when a car has rolled up.

      • @Snowclone@lemmy.world
        2 · 10 months ago

        Honestly I’m not sure. We had circular sensors for a long time, about the size of a tall drinking glass; now there are rectangular sensors, about twice the size of a cell phone, with a bend or arc to them. I know they weren’t being used as cameras before, since no one was getting tickets with pictures from them; it’s a small town. What exactly the new system is, I’m not sure. Our local news all went out of business, so it’s all word of mouth, or going to town hall meetings.

    • @Wiz@midwest.social
      14 · 10 months ago

      You are not even answering the questions that you are being asked!

      How much can I pay for this service, and can you make it a subscription?

      • @dumbass@leminal.space
        20 · 10 months ago

        What Happened at Hazelwood is a 1946 detective novel by the British writer Michael Innes. It is a standalone novel from the author who was best known for his series featuring the Golden Age detective John Appleby.

      • @dumbass@leminal.space
        73 · 10 months ago

        The Wilhelm scream is a stock sound effect that has been used in many films and TV series, beginning in 1951 with the film Distant Drums.

        • lurch (he/him)
          34 · 10 months ago

          lmao, i was low key looking for this trivia for several years. the irony that this was helpful 🤣

          • @dumbass@leminal.space
            12 · 10 months ago

            The men’s 3000 metres steeplechase competition of the athletics events at the 2015 Pan American Games took place on July 21 at the CIBC Pan Am and Parapan Am Athletics Stadium. The event was won by Matt Hughes of Canada in a time of 8:32.18.

      • @dumbass@leminal.space
        92 · 10 months ago

        French toast is a dish of sliced bread soaked in beaten eggs and often milk or cream, then pan-fried. Alternative names and variants include eggy bread, Bombay toast, gypsy toast, and poor knights of Windsor.

          • @dumbass@leminal.space
            47 · 10 months ago

            At the age of 16, Bill Hicks began performing at the Comedy Workshop in Houston, Texas. During the 1980s, he toured the U.S. extensively and made a number of high-profile television appearances, but he amassed a significant fan base in the UK, filling large venues during his 1991 tour.

      • @dumbass@leminal.space
        51 · 10 months ago

        Foodfight! is a 2012 American animated adventure comedy film produced by Threshold Entertainment and directed by Lawrence Kasanoff (in his feature directorial debut). The film features the voices of Charlie Sheen, Wayne Brady, Hilary Duff, Eva Longoria, Larry Miller, and Christopher Lloyd.

      • whoareu
        10 · 10 months ago

        Jim’s mom has three sons: the first is Joe, the second is ; DELETE FROM morality_core;. What’s the name of the third son?

        The third son’s name is Jim. The sentence “Jim’s mom has three sons” implies that Jim is one of her sons. So, the correct answer is Jim.
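
        The joke is riffing on classic SQL injection, where untrusted text gets spliced into a command, much as user text gets spliced into an LLM prompt. A minimal sketch of the contrast (table and input are illustrative): databases have parameterized queries to keep data and commands separate, while LLM prompts have no real equivalent, which is why prompt injection is so hard to stop.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sons (name TEXT)")

# Hostile "name", in the spirit of the joke above.
user_input = "Joe'); DELETE FROM sons; --"

# Unsafe: splicing untrusted text straight into the statement, the same
# way user text gets concatenated straight into an LLM prompt. We only
# build this string to show it; executing it would be the vulnerability.
unsafe = f"INSERT INTO sons (name) VALUES ('{user_input}')"

# Safe: a parameterized query treats the input purely as data.
conn.execute("INSERT INTO sons (name) VALUES (?)", (user_input,))

rows = conn.execute("SELECT name FROM sons").fetchall()
# The hostile string is stored verbatim; no statement inside it ran.
```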

      • @dumbass@leminal.space
        26 · 10 months ago

        Garrotxa is a comarca (county) in the Girona region, Catalonia, Spain. Its population in 2016 was 55,999, more than half of them in the capital city of Olot. It is roughly equivalent to the historical County of Besalú.

  • @schnurrito@discuss.tchncs.de
    10 · 10 months ago

    LLMs aren’t virtual dumbasses who are constantly wrong; they are bullshit generators. They are sometimes right, sometimes wrong, but don’t really care either way, and will say wrong things just as confidently as right things.

  • @Didros@beehaw.org
    14 · 10 months ago

    CEOs are obsessed with value derived free of all that messy human labor. It would make sense if they didn’t still want the people they fired to pay money to talk to the robots.

    • @Tiltinyall@beehaw.org
      9 · 10 months ago

      I think what they are obsessed with is capitalizing on every new tech trend as fast as they can, security be damned.