US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

In a survey comparing the views of a nationally representative sample of the general public (5,410 respondents) with a sample of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while only 15 percent expect to be harmed.

The public does not share this confidence. Only about 11 percent of the public says that “they are more excited than concerned about the increased use of AI in daily life.” They’re much more likely (51 percent) to say they’re more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.

  • @sheetzoos@lemmy.world · 15 points · 2 days ago

    New technologies are not the issue. The problem is billionaires will fuck it up because they can’t control their insatiable fucking greed.

    • ☂️- · 5 points · edited · 1 day ago

exactly. we could very well work fewer hours for the same pay. we wouldn’t be as depressed and angry as we are right now.

      we just have to overthrow, what, like 2000 people in a given country?

  • Suite404 · 0 points · 2 days ago

    It should. We should have radically different lives today because of technology. But greed keeps us in the shit.

  • @EndlessNightmare@reddthat.com · 18 points · 2 days ago

AI has its place, but they need to stop trying to shoehorn it into anything and everything. It’s the new “Internet of Things”: cramming connectivity into shit that doesn’t need it.

    • @poopkins@lemmy.world · 7 points · 2 days ago

      You’re saying the addition of Copilot into MS Paint is anything short of revolutionary? You heretic.

  • @kreskin@lemmy.world · 10 points · 2 days ago

It’s just going to help industry provide inferior services and make more profit. Like AI doctors.

  • @TommySoda@lemmy.world · 137 points · 3 days ago

    If it was marketed and used for what it’s actually good at this wouldn’t be an issue. We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people’s jobs easier and achieve better results. I understand its uses and that it’s not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.

    • @faltryka@lemmy.world · 30 points · 3 days ago

      The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.

      • Pennomi · 5 points · 3 days ago

        Yes, but when the price is low enough (honestly free in a lot of cases) for a single person to use it, it also makes people less reliant on the services of big corporations.

        For example, today’s AI can reliably make decent marketing websites, even when run by nontechnical people. Definitely in the “good enough” zone. So now small businesses don’t have to pay Webflow those crazy rates.

        And if you run the AI locally, you can also be free of paying a subscription to a big AI company.
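A minimal sketch of the “run it locally” option, using Ollama’s local HTTP API. This assumes an Ollama server is already running on its default port with a `llama3` model pulled; the model name and prompt are illustrative, not prescriptive:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt, model="llama3"):
    """Assemble the JSON body for a single non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="llama3"):
    """Send the prompt to the locally hosted model; no cloud subscription involved."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example use (requires a running Ollama server):
# generate("Write a one-paragraph marketing blurb for a bakery.")
```

Everything stays on your own machine, which is exactly the subscription-free scenario described above.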

        • @einkorn@feddit.org · 1 point · 3 days ago

Except no employer will allow you to use your own AI model. Just like you can’t bring your own work equipment (which in many regards is even a good thing), companies will force you to use their specific type of AI for your work.

          • Pennomi · 3 points · 3 days ago

            Presumably “small business” means self-employed or other employee-owned company. Not the bureaucratic nightmare that most companies are.

      • @ferb@sh.itjust.works · 21 points · 3 days ago

        This is exactly the result. No matter how advanced AI gets, unless the singularity is realized, we will be no closer to some kind of 8-hour workweek utopia. These AI Silicon Valley fanatics are the same ones saying that basic social welfare programs are naive and un-implementable - so why would they suddenly change their entire perspective on life?

        • AceofSpades · 4 points · 3 days ago

          This vision of the AI making everything easier always leaves out the part where nobody has a job as a result.

          Sure you can relax on a beach, you have all the time in the world now that you are unemployed. The disconnect is mind boggling.

    • @count_dongulus@lemmy.world · 10 points · 3 days ago

Maybe pedantic, but:

Everyone seems to think CEOs are the problem. They are not. They report to and get broad instruction from the board. The board can fire the CEO. If you get rid of a CEO, the board will just hire a replacement.

      • @Zorque@lemmy.world · 21 points · 3 days ago

And if you get rid of the board, the shareholders will appoint a new one. If you somehow get rid of all the shareholders, like-minded people will slot themselves into those positions.

        The problems are systemic, not individual.

        • @MangoCats@feddit.it · 6 points · 3 days ago

          Shareholders only care about the value of their shares increasing. It’s a productive arrangement, up to a point, but we’ve gotten too good at ignoring and externalizing the human, environmental, and long term costs in pursuit of ever increasing shareholder value.

    • @MangoCats@feddit.it · -2 points · 3 days ago

      We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors.

That’s an opinion, one I share in the vast majority of cases, but there’s a lot of artwork that AI really can do “good enough” for its purpose, and we should be freeing up the human artists to do the more creative work. Writers: if AI is turning out acceptable copy (which in my experience is almost never, so far, but hypothetically, eventually), why use human writers for that? And so on down the line.

      The problem is that capitalism and greedy CEOs are hyping the technology as the next big thing, looking for a big boost in their share price this quarter, not being realistic about how long it’s really going to take to achieve the things they’re hyping.

“Artificial Intelligence” has been 5-10 years off for 40 years. We have seen amazing progress in the past 5 years as compared to the previous 35, but it’s likely to be 35 more before half the things that are being touted as “here today” are actually working at a positive ROI. There are going to be more than a few more examples like the “smart grocery store” where you just put things in your basket and walk out and get charged “appropriately”, supposedly based on AI surveillance but really mostly powered by low-cost labor somewhere else on the planet.

  • snooggums · 13 points · 3 days ago

Experts are working from their perspective, which involves being employed to know the details of how the AI works and its potential benefits. They are also invested in it being successful, since they spent the time gaining that expertise. I would guess a number of them work in fields that are not easily visible to the public, and use AI systems in ways the public never will, because they are focused on things like pattern recognition on viruses or identifying locations to excavate for archaeology, tasks that always end with a human verifying the results. They use AI as a tool and see the indirect benefits.

The general public’s experience is being told AI is a magic box that will be smarter than the average person, that has made some flashy images, and that sounds more like a person than previous automated voice systems. They see it spit out a bunch of incorrect or incoherent answers, because they are using it the way it was promoted: as actually intelligent. They also see this unreliable tech being jammed into things that worked fine previously, and the negative outcome of the hype not meeting the promises. They reject it because the way it is being pushed onto the public does not meet the expectations set by the advertising.

    That is before the public is being told that AI will drive people out of their jobs, which is doubly insulting when it does a shitty job of replacing people. It is a tool, not a replacement.

  • IndiBrony · 43 points · 3 days ago

    The first thing seen at the top of WhatsApp now is an AI query bar. Who the fuck needs anything related to AI on WhatsApp?

      • @alphabethunter@lemmy.world · 6 points · 2 days ago

Lots of people. I need it because it’s how my clients at work prefer to communicate with me, and it’s also how all my family members and friends communicate.

    • @sgtgig@lemmy.world · 4 points · 2 days ago

Android Messages and Facebook Messenger have also pushed in AI as “something you can chat with”.

I’m not here to talk to your fucking chatbot, I’m here to talk to my friends and family.

    • @alphabethunter@lemmy.world · 10 points · 3 days ago

      Right?! It’s literally just a messenger, honestly, all I expect from it is that it’s an easy and reliable way of sending messages to my contacts. Anything else is questionable.

        • @alphabethunter@lemmy.world · 2 points · 2 days ago

          Yes, there are. You just have to live in one of the many many countries in the world where the overwhelming majority of the population uses whatsapp as their communication app. Like my country. Where not only friends and family, but also businesses and government entities use WhatsApp as their messaging app. I have at least a couple hundred reasons to use WhatsApp, including all my friends, all my family members, and all my clients at work. Do I like it? Not really. Do I have a choice? No. Just like I don’t have a choice on not using gmail, because that’s the email provider that the company I work for decided to go with.

          • @Nuxleio@lemmy.ml · -3 points · 2 days ago

            SMS works fine in any country.

            And you can isolate your business requirements from your personal life.

        • Echo Dot · 3 points · 2 days ago

I have 47 good reasons. Those 47 good reasons are that the people in my contact list have WhatsApp and use it as their primary method of communicating.

            • Echo Dot · 3 points · 2 days ago

              No it doesn’t. It’s slow, can’t send files, can’t send video or images, doesn’t have read receipts or away notifications. Why would I use an inferior tool?

              Why do you even care anyway?

              • @Nuxleio@lemmy.ml · 1 point · edited · 2 days ago

                Meta directly opposes the collective interests and human rights of all working class people, so I think the better question is how come you don’t care.

                There are many good reasons to not use WhatsApp. You’ve already correctly identified 47 of them.

                • @alphabethunter@lemmy.world · 2 points · 2 days ago

I have hardly ever come across a person more self-centered and a bigger fan of virtue signaling than you. You ignored literally everything we said, and your alternative was just “SMS”, even to the point of saying that the other commenter should stop talking to their 47 friends and family members.

  • @Naevermix@lemmy.world · 8 points · edited · 2 days ago

    They’re right. What happens to the workers when they’re no longer required? The horses faced a similar issue at the advent of the combustion engine. The solution? Considerably fewer horses.

  • moonlight · 22 points · 3 days ago

    Depends on what we mean by “AI”.

    Machine learning? It’s already had a huge effect, drug discovery alone is transformative.

    LLMs and the like? Yeah I’m not sure how positive these are. I don’t think they’ve actually been all that impactful so far.

    Once we have true machine intelligence, then we have the potential for great improvements in daily life and society, but that entirely depends on how it will be used.

    It could be a bridge to post-scarcity, but under capitalism it’s much more likely it will erode the working class further and exacerbate inequality.

    • @MangoCats@feddit.it · 2 points · 3 days ago

      Machine learning? It’s already had a huge effect, drug discovery alone is transformative.

Machine learning is just large-scale automated optimization, something that had been done for decades before; the hardware finally reached a point where automated searches started out-performing more informed selective searches.

It’s the same way AlphaZero got better at chess than Deep Blue: it just steamrollered the problem with raw power.
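The “raw power” point can be sketched as a toy random search: the optimizer knows nothing about the function, it just samples enormously. This is a hypothetical illustration of brute automated optimization, not how AlphaZero actually works:

```python
import random

def random_search(f, lo, hi, iters=10_000, seed=0):
    """Minimize f on [lo, hi] by blind sampling: no domain knowledge,
    no informed strategy, just raw compute."""
    rng = random.Random(seed)
    best_x = rng.uniform(lo, hi)
    best_y = f(best_x)
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Example: minimize (x - 3)^2 over [-10, 10]; lands very close to x = 3
x, y = random_search(lambda v: (v - 3) ** 2, -10, 10)
```

With enough samples the dumb search gets arbitrarily close to the optimum, which is the trade of insight for hardware described above.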

    • Pennomi · 5 points · 3 days ago

      As long as open source AI keeps up (it has so far) it’ll enable technocommunism as much as it enables rampant capitalism.

      • moonlight · 6 points · 3 days ago

        I considered this, and I think it depends mostly on ownership and means of production.

Even in the scenario where everyone has access to superhuman models, labor would still be devalued. When combined with robotics and other forms of automation, the capitalist class would no longer need workers, and large parts of the economy would disappear. That would create a two-tiered society, where those with resources become incredibly wealthy and powerful, and those without have no ability to do much of anything, and would likely revert to an agricultural society (assuming access to land) or just be propped up with something like UBI.

        Basically, I don’t see how it would lead to any form of communism on its own. It would still require a revolution. That being said, I do think AGI could absolutely be a pillar of a post capitalist utopia, I just don’t think it will do much to get us there.

        • @MangoCats@feddit.it · 2 points · 3 days ago

          It would still require a revolution.

          I would like to believe that we could have a gradual transition without the revolution being needed, but… present political developments make revolution seem more likely.

        • @FourWaveforms@lemm.ee · 3 points · 3 days ago

          It will only help us get there in the hands of individuals and collectives. It will not get us there, and will be used to the opposite effect, in the hands of the 1%.

        • @MangoCats@feddit.it · 2 points · 3 days ago

          or just propped up with something like UBI.

          That depends entirely on how much UBI is provided.

          I envision a “simple” taxation system with UBI + flat tax. You adjust the flat tax high enough to get the government services you need (infrastructure like roads, education, police/military, and UBI), and you adjust the UBI up enough to keep the wealthy from running away with the show.

          Marshall Brain envisioned an “open source” based property system that’s not far off from UBI: https://marshallbrain.com/manna
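The UBI + flat tax mechanism above can be sketched in a few lines. The 40% rate and $24k UBI are hypothetical numbers chosen only to show the arithmetic, not figures from the comment:

```python
def net_income(gross, flat_rate=0.40, ubi=24_000):
    """UBI plus flat tax: everyone pays the same marginal rate,
    everyone receives the same UBI."""
    return gross * (1 - flat_rate) + ubi

# Below the break-even point (ubi / flat_rate = $60k gross) you receive
# more than you pay in; above it you are a net contributor. So the
# effective rate rises with income even though the marginal rate is flat.
assert net_income(0) == 24_000         # no earnings: UBI only
assert net_income(60_000) == 60_000    # break-even point
assert net_income(200_000) == 144_000  # net tax $56k, effective rate 28%
```

Adjusting the two knobs (`flat_rate` up for more services, `ubi` up for more redistribution) is exactly the tuning described above.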

  • @Clent@lemmy.dbzer0.com · 16 points · 2 days ago

I do as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is running dry because no one wants to enter the career with companies hanging AI over everyone’s heads. Basic supply and demand says my skillset will become more valuable.

Someone will need to clean up the AI slop. I’ve already been in similar positions where I was brought in to clean up code bases that failed after being outsourced.

AI is simply the next iteration. The problem is always the same: businesses don’t know what they really want and need, and have no ability to assess what has been delivered.

    • @lobut@lemmy.ca · 42 points · 4 hours ago

A completely random story, but: I’m on the AI team at my company. However, I do infrastructure/application work rather than the AI stuff. First off, I had to convince my company to move our data scientist to this team. They had him doing DevOps work (complete mismanagement of resources). Also, the work I was doing with AI was SO unsatisfying. We weren’t tweaking any models. We were just shoving shit to ChatGPT. Now it would be interesting if you’re doing RAG stuff, maybe, or other things. However, I was “crafting” my prompt and I could not give a shit less about writing a perfect prompt. I’m typically used to coding what I want, but here I had to find out how to write it properly: “please don’t format it like X”. I wasn’t using AI to write code; it was a service endpoint.

During lunch with the AI team, they kept saying things like “we only have 10 years left at most”. I was like, “but if you have AI spit out this code, if something goes wrong… don’t you need us to look into it?” They were like, “yeah, but what if it can tell you exactly what the code is doing?” I’m like, “but who’s going to understand what it’s saying…?” “no, it can explain the type of problem to anyone”.

      I said, I feel like I’m talking to a libertarian right now. Every response seems to be some solution that doesn’t exist.

    • @mctoasterson@reddthat.com · 5 points · 2 days ago

      AI can look at a bajillion examples of code and spit out its own derivative impersonation of that code.

      AI isn’t good at doing a lot of other things software engineers actually do. It isn’t very good at attending meetings, gathering requirements, managing projects, writing documentation for highly-industry-specific products and features that have never existed before, working user tickets, etc.

    • @ImmersiveMatthew@sh.itjust.works · 1 point · 2 days ago

I too am a developer, and I am sure you will agree that while the overall intelligence of models continues to rise, the promise of AGI will likely remain elusive without a concerted focus on enhancing logic. AI cannot really develop further without the logic being dramatically improved, yet logic is rather stagnant even in the latest reasoning models, at least when it comes to coding.

I would argue that if we had much better logic, with all other metrics being the same, we would have AGI now and developer jobs would be at risk. Given the lack of discussion about these logic gaps, I do not foresee AGI arriving anytime soon, even with bigger models coming.

      • @Clent@lemmy.dbzer0.com · 6 points · 1 day ago

        If we had AGI, the number of jobs that would be at risk would be enormous. But these LLMs aren’t it.

        They are language models and until someone can replace that second L with Logic, no amount of layering is going to get us there.

Those layers are basically all the previous AI techniques laid over the top of an LLM, but anyone who has a basic understanding of languages can tell you how illogical they are.

  • katy ✨ · 4 points · 2 days ago

    remember when tech companies did fun events with actual interesting things instead of spending three hours on some new stupid ai feature?