Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their main LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
  • @Akuchimoya@startrek.website · 26 points · 26 days ago (edited)

    I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, and are not made to be factually correct. They understood it when I put it that way, but librarians are supposed to be “information professionals”. If they, as a slightly better trained subset of the general public, don’t know that, the general public has no hope of knowing that.

    • @ricecooker@sh.itjust.works · 4 points · 26 days ago

      People need to understand it’s a really well-trained parrot that has no idea what it is saying. That’s why it can give you chicken recipes and software code; it’s seen them before. Then it uses statistics to put words together that usually appear together. It’s not thinking at all, despite LLMs using words like “reasoning” or “thinking”.
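      The “statistics to put words together” idea can be sketched as a toy bigram model. This is a drastic simplification of real LLMs (which use neural networks over subword tokens), and the corpus here is made up, but the principle of predicting the next token from previously seen patterns is the same:

```python
import random
from collections import defaultdict

# Toy "parrot": record which word follows which in the training text,
# then generate by sampling continuations that were seen before.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def next_word(word: str) -> str:
    # Sample a continuation, weighted by how often it followed `word`.
    return random.choice(follows[word])

print(next_word("the"))  # always some word that followed "the" in the corpus
```

      The model never “knows” what a cat is; it only knows which words tend to follow which.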

    • @WagyuSneakers@lemm.ee · 23 points · 27 days ago

      It’s so weird watching the masses ignore industry experts and jump on weird media hype trains. This must be how doctors felt during Covid.

      • @Llewellyn@lemm.ee · 4 points · 26 days ago

        It’s so weird watching the masses ignore industry experts and jump on weird media hype trains.

        Is it though?

        • @WagyuSneakers@lemm.ee · 2 points · 26 days ago

          I’m the expert in this situation and I’m getting tired of explaining to junior engineers and laymen that it is a media hype train.

          I worked on ML projects before they got rebranded as AI. I get to sit in the room when these discussions happen with architects and actual leaders. This is hype. Anyone who tells you otherwise is lying or selling you something.

          • @BlushedPotatoPlayers@sopuli.xyz · 1 point · 26 days ago

            I see how that is a hype train, and I also work with machine learning (though I’m far from an expert), but I’m not convinced these things are not getting intelligent. I know what their problems are, but I’m not sure the human brain doesn’t work the same way, just (for now) more efficiently.

            That is, we have visual information, and some evolutionary BIOS, while LLMs have to read the whole internet and use a power plant to function - but what if our brains are just the same bullshit generators, we are just unaware of it?

            • @WagyuSneakers@lemm.ee · 1 point · 26 days ago

              I work in an extremely related field and spend my days embedded into ML/AI projects. I’ve seen teams make some cool stuff and I’ve seen teams make crapware with “AI” slapped on top. I guarantee you that you are wrong.

              What if our brains…

              There’s the thing: you can go look this information up. You don’t have to guess. This information is readily available to you.

              LLMs work by agreeing with you and stringing together coherent text in patterns they recognize from huge samples. It’s not particularly impressive, and it is far, far closer to the early chatbots of last century than to real AGI or some sort of singularity. The limits we’re at now are physical: look up how much electricity and water it takes just to do trivial queries. Progress has plateaued, as it frequently does with tech like this. That’s okay; it’s still a neat development. The only big takeaway from LLMs is that agreeing with people makes them think you’re smart.

              In fact, at higher levels of engineering LLMs are a glorified Google. When most of the stuff you need to do doesn’t have a million Stack Overflow articles to train on, it’s going to be difficult to get an LLM to contribute in any significant way. I’d go so far as to say it hasn’t introduced any tool I didn’t already have. It’s just mildly more convenient than some of them while the costs are low.

    • @Arkouda@lemmy.ca · 2 points · 26 days ago

      Librarians went to school to learn how to keep order in a library. That does not inherently make them have more information in their heads than the average person, especially regarding things that aren’t books and book organization.

      • @Akuchimoya@startrek.website · 0 points · 26 days ago

        Librarians go to school to learn how to manage information, whether it is in book format or otherwise. (We tend to think of libraries as places with books because, for so much of human history, that’s how information was stored.)

        They are not supposed to have more information in their heads, they are supposed to know how to find (source) information, catalogue and categorize it, identify good information from bad information, good information sources from bad ones, and teach others how to do so as well.

  • JackFrostNCola · 32 points · 27 days ago

    “Half of LLM users” believe this. Which is not to say that people who understand how flawed LLMs are, or what their actual function is, don’t use LLMs and therefore aren’t included in this statistic?
    This is kinda like saying ‘60% of people who pay for their daily horoscope believe it is an accurate prediction’.

  • Th4tGuyII · 33 points · 27 days ago

    LLMs are made to mimic how we speak, and some can even pass the Turing test, so I’m not surprised that people who don’t know better think of these LLMs as conscious in some way or another.

    It’s not necessarily a fault of those people; it’s a fault of how LLMs are purposefully misadvertised to the masses.

  • Encrypt-Keeper · 6 points · 27 days ago (edited)

    The funny thing about this scenario is by simply thinking that’s true, it actually becomes true.

  • @notsoshaihulud@lemmy.world · 38 points · 27 days ago

    I’m 100% certain that LLMs are smarter than half of Americans. What I’m not so sure about is that the people with the insight to admit being dumber than an LLM are the ones who really are.

  • I wouldn’t be surprised if that is true outside the US as well. People who actually (have to) work with the stuff usually learn quickly that it’s only good at a few things, but if you just hear about it in the (pop, non-techie) media (including YT and such), you might be deceived into thinking Skynet is just a few years away.

    • Singletona082 · 5 points · 27 days ago

      It’s a one trick pony.

      That trick also happens to be a really neat trick that can make people think it’s a Swiss Army knife instead of a shovel.

  • Singletona082 · 40 points · 27 days ago

    Am American.

    …this is not the flex that the article writer seems to think it is.

  • @blady_blah@lemmy.world · 15 points · 27 days ago

    You say this like this is wrong.

    Think of a question that you would ask an average person and then think of what the LLM would respond with. The vast majority of the time the LLM would be more correct than most people.

    • @LifeInMultipleChoice@lemmy.dbzer0.com · 17 points · 27 days ago

      A good example is the post on here about tax brackets. Far more Republicans didn’t know how tax brackets worked than Democrats. But every mainstream language model would have gotten the answer right.
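      For reference, the marginal-bracket arithmetic that trips people up is simple: each rate applies only to the slice of income inside its bracket, not to the whole income. The rates and thresholds below are made up for illustration:

```python
# Hypothetical brackets: (upper bound of bracket, rate on income in it).
BRACKETS = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]

def tax_owed(income: float) -> float:
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        # Tax only the slice of income that falls inside this bracket.
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# Crossing into a higher bracket never reduces take-home pay:
print(tax_owed(50_000))  # 10k at 10% + 30k at 20% + 10k at 30% = 10000.0
```

      The common misconception is that the top rate applies to all income (which would give 15,000 here).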

      • @blady_blah@lemmy.world · 0 points · 26 days ago

        Try asking it a logic question. What question are you asking that the LLMs are getting wrong and your average person is getting right? How are you proving intelligence here?

        • @JacksonLamb@lemmy.world · 1 point · 26 days ago

          LLMs are an autocorrect.

          Let’s use a standard definition like “intelligence is the ability to acquire, understand, and use knowledge.”

          It can acquire (learn) and use (access, output) data but it lacks the ability to understand it.

          This is why we have AI telling people to use glue on pizza or drink bleach.

          I suggest you sit down with an AI some time and put a few versions of the Trolley Problem to it. You will likely see what is missing.

          • @blady_blah@lemmy.world · 1 point · 26 days ago

            I think this all has to do with how you are going to compare and pick a winner in intelligence. The traditional way is usually with questions, which LLMs tend to do quite well at. They have a tendency to hallucinate, but in my experience the amount they hallucinate is less than the amount they simply don’t know.

            The issue is really all about how you measure intelligence. Is it a word problem? A knowledge problem? A logic problem? And then the issue is: can the average person get your question correct? A big part of my statement here is that the average person is not very capable of answering those types of questions.

            In this day and age of alternate facts and vaccine denial, science denial, and other ways that your average person may try to be intentionally stupid… I put my money on an LLM winning the intelligence competition versus the average person. I think the LLM would beat me in 90% of topics.

            So, the question to you is: how do you create this competition? What are the questions you’re going to ask that the average person is going to get right and the LLM will get wrong?

            • @JacksonLamb@lemmy.world · 1 point · 26 days ago

              I have already suggested it. Trolley problems etc. Ask it to tell you its reasoning. It always becomes preposterous sooner or later.

              My point here is that remembering the correct answer or performing a mathematical calculation are not a measure of understanding.

              What we are looking for that sets apart INTELLIGENCE is an ability to genuinely understand. LLMs don’t have that, any more than older autocorrects did.

            • @eletes@sh.itjust.works · 4 points · 26 days ago

              Literally just asked Copilot through our work subscription.

              I know it looks like I’m shitting on LLMs, but really I’m just trying to highlight that they still have gaps in reasoning that they’ll probably fix this decade.

          • @blady_blah@lemmy.world · 0 points · 26 days ago

            I asked Gemini and ChatGPT (the free one) and they both got it right. How many people do you think would get that right if you didn’t write it down in front of them? If Copilot gets it wrong, as per eletes’ post, then the AI success rate is 66%. Ask your average person walking down the street and I don’t think you would do any better. Plus there are a million questions that the LLMs would vastly outperform your average human on.

            • @JacksonLamb@lemmy.world · 0 points · 26 days ago (edited)

              I think you might know some really stupid or perhaps just uneducated people. I would expect 100% of people to know how many Rs there are in “strawberry” without looking at it.

              Nevertheless, spelling is memory and memory is not intelligence.
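              The counting task behind this exchange (the “how many Rs in ‘strawberry’” question) is trivial procedurally, which is part of why the LLM failures feel so jarring:

```python
# Character counting is trivial in code. LLMs often fumble it because
# they process subword tokens (roughly "straw" + "berry"), not letters.
word = "strawberry"
print(word.count("r"))  # → 3
```

      This is also a case where the failure mode is an artifact of tokenization rather than a measure of general reasoning, which cuts both ways in the argument above.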

  • @aceshigh@lemmy.world · 3 points · 26 days ago

    Don’t they reflect how you talk to them? E.g., my ChatGPT doesn’t have a sense of humor and isn’t sarcastic or sad. It only uses formal language and doesn’t use emojis. It just gives me ideas that I do trial and error with.