• @dkc@lemmy.world
    53 points · 8 days ago

    The research paper looks well written, but I couldn’t find any information on whether it is going to be peer reviewed and published in a reputable journal. I have little faith in private businesses that profit from AI giving an unbiased view of how AI works. The first question I’d like answered is: did Anthropic’s marketing department review the paper, and did they offer any corrections or feedback? We’ve all heard the stories about the tobacco industry paying for papers to be written about the benefits of smoking and to refute health concerns.

    • @StructuredPair@lemmy.world
      15 points · 8 days ago

      A lot of AI research isn’t published in journals; it’s either posted to a corporate website or put up on the arXiv. There are some AI journals, but the AI community doesn’t particularly value them (and threw a bit of a fit when they came out). In my opinion this article is mostly marketing and doesn’t show anything that should surprise anyone familiar with how neural networks generically work.

  • @cholesterol@lemmy.world
    38 points · 8 days ago

    you can’t trust its explanations as to what it has just done.

    I might have had a lucky guess, but this was basically my assumption. You can’t ask LLMs how they work and get an answer coming from an internal understanding of themselves, because they have no ‘internal’ experience.

    Unless you make a scanner like the one in the study, non-verbal processing is as much of a black box to their ‘output voice’ as it is to us.

    • @cley_faye@lemmy.world
      4 points · 8 days ago

      Anyone who has used them for even a limited amount of time will tell you that the thing can give you a correct, detailed explanation of how to do something and then produce a broken result. And vice versa. Digging further by asking follow-up questions has zero chance of being useful.

  • @Imgonnatrythis@sh.itjust.works
    80 points · 9 days ago

    “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.

    That is precisely how I do math. I feel a little targeted that they called this odd.
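
    For anyone curious what that looks like written out, here’s a toy Python sketch of the two-path trick described in the quote. It’s my own illustration of the description, not Claude’s actual mechanism:

    ```python
    import random

    def add_like_claude(a: int, b: int) -> int:
        """Toy illustration of the two parallel paths described above.
        Not Claude's actual circuit, just the same shape of computation."""
        # Path 1: a fuzzy magnitude estimate ("40ish + 60ish -> 92ish").
        # Simulate the fuzziness with a little noise on the true sum.
        estimate = a + b + random.randint(-4, 4)
        # Path 2: the exact last digit, from the units alone (6 + 9 ends in 5).
        last_digit = (a % 10 + b % 10) % 10
        # Combine: snap the fuzzy estimate to the nearby number with the
        # right last digit. Exactly one of 10 consecutive integers matches.
        candidates = range(estimate - 5, estimate + 5)
        return next(n for n in candidates if n % 10 == last_digit)

    print(add_like_claude(36, 59))  # 95
    ```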

    • @JayGray91@lemmy.zip
      29 points · 8 days ago

      I think it’s odd in the sense that it’s software, so it should already know what 36 plus 59 is in a picosecond, instead of doing mental arithmetic like we do.

      At least that’s my takeaway

      • @shawn1122@lemm.ee
        18 points · 8 days ago

        This is what Chollet’s ARC-AGI test has also revealed about current AI/LLMs. They have a tendency to approach problems with this trial-and-error method and can be extremely inefficient (in their current form) with anything involving abstract or deductive reasoning.

        Most LLMs do terribly at the test; the most recent breakthrough came with reasoning models, but even the reasoning models struggle.

        ARC-AGI is simple, but it demands a keen sense of perception and, in some sense, judgment. It consists of a series of incomplete grids that the test-taker must color in based on the rules they deduce from a few examples; one might, for instance, see a sequence of images and observe that a blue tile is always surrounded by orange tiles, then complete the next picture accordingly. It’s not so different from paint by numbers.

        The test has long seemed intractable to major AI companies. GPT-4, which OpenAI boasted in 2023 had “advanced reasoning capabilities,” didn’t do much better than the zero percent earned by its predecessor. A year later, GPT-4o, which the start-up marketed as displaying “text, reasoning, and coding intelligence,” achieved only 5 percent. Gemini 1.5 and Claude 3.7, flagship models from Google and Anthropic, achieved 5 and 14 percent, respectively.

        https://archive.is/7PL2a
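
        For anyone who hasn’t seen the benchmark, here’s a toy Python sketch of the “blue tile surrounded by orange tiles” rule from that example. The grid encoding is made up for the sketch and isn’t the real ARC format:

        ```python
        def apply_rule(grid: list[list[int]]) -> list[list[int]]:
            """One induced rule: paint every empty neighbor of a blue tile orange.
            0 = empty, 1 = blue, 2 = orange (codes invented for this sketch)."""
            h, w = len(grid), len(grid[0])
            out = [row[:] for row in grid]
            for y in range(h):
                for x in range(w):
                    if grid[y][x] == 1:  # found a blue tile
                        for dy in (-1, 0, 1):
                            for dx in (-1, 0, 1):
                                ny, nx = y + dy, x + dx
                                if (dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx < w:
                                    if out[ny][nx] == 0:
                                        out[ny][nx] = 2  # paint it orange
            return out

        example = [[0, 0, 0],
                   [0, 1, 0],
                   [0, 0, 0]]
        print(apply_rule(example))  # blue center now ringed by orange
        ```

        The hard part the test probes is inducing a rule like this from two or three example grids; executing it once you have it is trivial.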

        • @Goretantath@lemm.ee
          3 points · 8 days ago

          It’s funny, because I approach life with a trial-and-error method too. It’s not efficient, but I get the job done in the end. I always see others who don’t, and who give up, like all the people bad at computers who ask the company’s tech support to fix the problem instead of thinking about it for two seconds, and I wonder where life went wrong.

    • Echo Dot
      5 points · 8 days ago

      But you’re doing two calculations now: an approximate one and another one on the last digits. Since you’re going to do the approximate calculation anyway, you might as well just do the accurate calculation and be done in one step.

      This solution, while it works, has the feeling of evolution. No intelligent design, which I suppose makes sense considering the AI did essentially evolve.

      • @sapetoku@sh.itjust.works
        8 points · 8 days ago

        A regular AI should use a calculator subroutine, not try to discover basic math every time it’s asked something.
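
        That’s essentially what tool use (function calling) does in current systems: arithmetic gets routed to exact code instead of the model’s learned circuits. A minimal sketch of the idea, where `llm` is just a stand-in for whatever generate call you’d be using:

        ```python
        import re

        def answer(prompt: str, llm) -> str:
            """Route anything that looks like arithmetic to exact code;
            let the model handle everything else."""
            m = re.fullmatch(r"\s*what is (\d+)\s*([-+*])\s*(\d+)\??\s*",
                             prompt, re.IGNORECASE)
            if m:
                a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
                return str({"+": a + b, "-": a - b, "*": a * b}[op])
            return llm(prompt)  # not arithmetic: fall through to the model

        print(answer("What is 36 + 59?", llm=lambda p: "..."))  # 95, computed exactly
        ```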

      • @Imgonnatrythis@sh.itjust.works
        -72 points · 8 days ago

        Fascist. If someone does maths differently than your preference, it’s not “weird shit”. I’m facile with mental math despite what’s perhaps a non-standard approach, and it’s quite functional to be able to perform simple to moderate levels of mathematics mentally without relying on a calculator.

          • @Imgonnatrythis@sh.itjust.works
            -4 points · 8 days ago

            Thought police, mate. You don’t tell people that the way they think is weird shit just because they think differently than you. Break free from that path.

            • Lemminary
              4 points · 8 days ago

              The reply was literally “*I* use a calculator” followed by “AI should use one too”. Are you suggesting that you’re an LLM or how did you cut a piece of cloth for yourself out of that?

              • @GSV_Sleeper_Service@lemmy.world
                1 point · 7 days ago

                Calling someone a fascist for that is obviously a bit OTT, but you’ve ignored the “do weird shit” part of the response, so it wasn’t literally what you said. Taking the full response into account, you can easily interpret it as “I don’t bother with mental maths but use a calculator instead; anyone who isn’t like me is weird as shit.”

                That is a bit thought police-y

                • @ClamDrinker@lemmy.world
                  1 point · 7 days ago

                  Except as you demonstrated, it requires quite a few leaps of interpretation, assuming the worst interpretations of OP’s statement, which is why it’s silly. OP clearly limited their statement to themselves and AI.

                  Now if OP said, “everyone should use a calculator or die”, maybe then it would have been a valid response.

                • Lemminary
                  2 points · 7 days ago

                  I didn’t ignore it, I just interpret it differently as in, “I don’t need to do this unusual stuff everyone does without a calculator”. Calling something weird doesn’t necessarily mean it’s off-color or that it’s a trait the other person has. In my use case, weird just means unexpected or counterintuitive, and maybe complex enough that I can’t bother with describing it properly. I know because I use it that way too. Weird doesn’t have to mean a third eye on your face every time. I mean, doing the weird math thing is taught in school as a strategy.

                  I do want to mention that it’s not the first time I see a visceral reaction to a passing comment. I usually see this from marginalized groups, and I can assure you, both Kolanki and I are part of those too. And knowing his long comment history, I sincerely doubt he meant anyone is weird as shit.

                  And even if it’s a bit thought-policey, how does that warrant calling someone a fascist and going off on them like that? That’s also a bit weird (as in odd).

        • @artichoke99@lemm.ee
          15 points · 8 days ago

          OK, but the LLM is evidently shit at math, so its “non-standard” approach should still be adjusted.

        • I am talking about the AI. It’s already a computer. It shouldn’t need to do anything other than calculate the equations. It doesn’t have a brain, it doesn’t think like a human, so it shouldn’t need any special tools or ways to help it do math. It is a calculator, after all.

      • @Goretantath@lemm.ee
        1 point · 8 days ago

        Yes, you shove it off onto another to do it for you instead of doing it yourself, and the AI doesn’t.

    • @cm0002@lemmy.worldOP
      7 points · 9 days ago

      I think this comm is more suited for news articles talking about it, though I did post that link to !ai_@lemmy.world, which I think would be a more suitable comm for those who want to go more in-depth on it.

  • Captain Poofter
    40 points · 9 days ago

    This is one of the most interesting things about LLMs that I have ever read.

    • @cm0002@lemmy.worldOP
      16 points · 9 days ago

      That bit about how it turns out they aren’t actually just predicting the next word is crazy and kinda blows the whole “It’s just a fancy text auto-complete” argument out of the water IMO

      • @pelespirit@sh.itjust.works
        4 points · 9 days ago

        I read an article saying that it can “think” in small chunks. They don’t know how much, though. That was also months ago; it’s probably expanded by now.

        • Captain Poofter
          6 points · 9 days ago

          Anything that claims it “thinks” in any way I immediately dismiss as an advertisement of some sort. These models are doing very interesting things, but they are in no way “thinking” as a sentient mind does.

          • @stephen01king@lemmy.zip
            -3 points · 8 days ago

            Anybody who claims they don’t “think”, before we’ve even completely figured out how they work, or even how human thought works, is just spreading anti-AI sentiment beyond what is logical.

            You should set a better example than the AI by arguing only from facts, rather than from things you hallucinate, if you want to prove your position on this matter.

            • @TimewornTraveler@lemm.ee
              -1 point · 7 days ago

              Shouldn’t you say the inverse is true? lol. Why call it thinking if we don’t know what thinking is or what it’s doing?

              And why are you fine with pro-AI sentiment but against anti-AI sentiment? Either way it’s a value judgment; quit acting like yours is the correct opinion.

              • @stephen01king@lemmy.zip
                1 point · 7 days ago

                I wasn’t calling it thinking. I’m saying that people claiming it’s not are jumping the gun. It’s also funny that you’re simply claiming I am pro-AI without any proof. This is what I meant when I said people who are anti-AI should strive to be better than the AI they criticise. Acting on non-facts makes you no better than the AI with its hallucinations.

                It’s also funny that you’re calling me out when I’m just mirroring what the other guy did, to make a point. He’s acting like his is the correct opinion, yet you’re only calling me out because he’s on your side of the argument. That’s simply a bad-faith argument on your part.

                • @TimewornTraveler@lemm.ee
                  1 point · 7 days ago

                  I see the misunderstanding, sorry. You’re still in the wrong, though. While you weren’t calling it thinking, the article certainly was. THAT’S why we’re saying it’s not. We’re doing what you said we should, but in the inverse, and you call it anti-AI. The jackass who wrote that article is jumping the gun, we’re saying “how tf can you call it thinking”, and you reply calling that anti-AI. Seems like a reasonable mistake, yeah?

          • @pelespirit@sh.itjust.works
            2 points · 9 days ago

            I wish I could find the article. It was researchers, and they were freaked out just as much as anyone else. It was only slightly above chance that it “thought”, not some huge revolutionary leap.

            • Captain Poofter
              5 points · 9 days ago

              There has been a flood of these articles. Everyone wants to sell their LLM as “the smartest one, closest to a real human”, even though the entire concept of calling them AI is a marketing misnomer.

          • @LarmyOfLone@lemm.ee
            -2 points · 8 days ago

            You know they don’t think - even though “It’s a peculiar truth that we don’t understand how large language models (LLMs) actually work.”?

            It’s truly shocking to read this from a mess of connected neurons and synapses like yourself. You’re simply doing fancy word prediction of the next word /s

      • @Voroxpete@sh.itjust.works
        39 points · 9 days ago

        It really doesn’t. You’re just describing the “fancy” part of “fancy autocomplete.” No one was ever really suggesting that they only predict the next word. If that were the case they would just be autocomplete, nothing fancy about it.

        What’s being conveyed by “fancy autocomplete” is that these models ultimately operate by combining the most statistically likely elements of their dataset, with some application of random noise. More noise creates more “creative” (meaning more random, less probable) outputs. They do not actually “think” as we understand thought. This can clearly be seen in the examples given in the article, especially to do with math. The model is throwing together elements that are statistically proximate to the prompt. It’s not actually applying a structured, logical method the way humans can be taught to.
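
        The “noise” knob being described here is, roughly, sampling temperature. A toy sampler makes the point; real ones work on tensors and add top-k/top-p filtering, but the idea is the same:

        ```python
        import math, random

        def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
            """Scale scores, softmax, sample. Low temperature -> almost always
            the most likely token; high temperature -> more 'creative' picks."""
            scaled = [(tok, score / temperature) for tok, score in logits.items()]
            peak = max(s for _, s in scaled)  # subtract max for numerical stability
            weights = [(tok, math.exp(s - peak)) for tok, s in scaled]
            r = random.uniform(0, sum(w for _, w in weights))
            for tok, w in weights:
                r -= w
                if r <= 0:
                    return tok
            return weights[-1][0]  # floating-point edge case

        # Mostly "mat", occasionally "sofa"; crank the temperature and anything goes.
        print(sample_next_token({"mat": 5.0, "sofa": 3.0, "moon": 0.5}, temperature=0.7))
        ```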

        • @FourWaveforms@lemm.ee
          17 points · 9 days ago

          Unfortunately, these articles are often written by people who don’t know enough to realize they’re missing important nuances.

          • @datalowe@lemmy.world
            9 points · 8 days ago

            It also doesn’t help that the AI companies deliberately use language to make their models seem more human-like and cogent. Saying that the model e.g. “thinks” in “conceptual spaces” is misleading imo. It abuses our innate tendency to anthropomorphize, which I guess is very fitting for a company with that name.

            On this point I can highly recommend this open access and even language-wise accessible article: https://link.springer.com/article/10.1007/s10676-024-09775-5 (the authors also appear on an episode of the Better Offline podcast)

            • @FourWaveforms@lemm.ee
              0 points · 8 days ago

              I can’t contemplate whether LLMs think until someone tells me what it means to think. It’s too easy to rely on understanding the meaning of that word only through its typical use with other words.

        • @reev@sh.itjust.works
          3 points · 9 days ago

          Genuine question regarding the rhyme thing: it can be argued that “predicting backwards isn’t very different”, but you can’t attribute generating the rhyme first to noise, right? So how does it “know” (for lack of a better word) to generate the rhyme first?

          • @dustyData@lemmy.world
            15 points · 9 days ago

            It already knows which words are, statistically, most commonly rhymed with each other, from the massive list of training poems. This is what the massive data sets are for. One of the interesting things is that it’s not predicting backwards, exactly. It’s actually mathematically converging on the response text to the prompt, all the words at the same time.

            • Semperverus
              -5 points · 9 days ago

              Which is exactly how we do it. Ours is just a little more robust.

              • @ThisIsNotHim@sopuli.xyz
                2 points · 8 days ago

                We also check to see if the word that popped into our heads actually rhymes by saying it out loud. Actual validation steps we can take is a bigger difference than being a little more robust.

                We also have non-list based methods like breaking the word down into smaller chunks to try to build up hopefully more novel rhymes. I imagine professionals have even more tools, given the complexity of more modern rhyme schemes.

        • @aesthelete@lemmy.world
          2 points · 8 days ago

          People are generally shit at understanding probabilities, and even when they have a fairly strong math background they tend to explain probabilistic outcomes through anthropomorphism rather than doing the more difficult and “think-painy” statistical analysis that would be required to know if there was anything more to it.

          I myself start to have thoughts that Balatro is purposefully screwing me over or feeding me outcomes when it’s just randomness and probability, as stated.

          Ultimately, it’s easier (and more fun) for us to think that way and it largely serves us better in everyday life.

          But these things are entire casinos’ worth of probability and statistics in and of themselves, and the people developing them want desperately to believe that they are something more than pseudorandom probabilistic fancy autocomplete engines.

          A lot of the folks at the forefront of this have paychecks on the line. Add the difficulty of getting someone to understand how something works when their salary depends on them not understanding it to the existing inability of humans to reason probabilistically and the AGI from LLM delusion becomes near impossible to shake for some folks.

          I wouldn’t be surprised if this AI hype bubble yields a cult in the end.

      • @Shanmugha@lemmy.world
        3 points · 9 days ago

        It doesn’t. Who the hell cares if someone allowed it to break “predict the whole text” into “predict it part by part” and, for rhymes, “start at the end”? That sounds like a naive (not as in “simplistic”, but as in “most straightforward”) way to code this, so given the task of writing an automatic poetry producer, I would start with something similar. The whole thing still stands as fancy auto-complete.
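
        Something like this, say (a deliberately naive sketch, with made-up word lists):

        ```python
        import random

        # Made-up word lists, just to show the control flow.
        RHYME_PAIRS = [("bright", "night"), ("tree", "sea"), ("rain", "again")]
        FILLER = ["the", "a", "soft", "cold", "under", "over", "falling", "wandering"]

        def couplet() -> str:
            end_a, end_b = random.choice(RHYME_PAIRS)  # step 1: fix the rhyme first
            def line(last_word: str) -> str:
                # step 2: fill in the rest of the line, working toward that word
                return " ".join(random.sample(FILLER, k=3) + [last_word])
            return line(end_a) + "\n" + line(end_b)

        print(couplet())
        ```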

          • @Shanmugha@lemmy.world
            1 point · 8 days ago

            Redditor as “a person active on Reddit”? I don’t see where I was talking about humans. Or am I misunderstanding the question?

              • @Shanmugha@lemmy.world
                1 point · 8 days ago

                Sounds scary. I read a story the other day about a dude who actually set himself up a Discord server full of chatbots, and that was his main place of “communicating” and “socializing”.

                • @aesthelete@lemmy.world
                  1 point · 8 days ago

                  This anecdote has the makings of a “men will literally x instead of going to therapy” joke.

                  On a more serious note, though, I really wish people would stop anthropomorphizing these things, especially when they do it while dehumanizing people and devaluing humanity as a whole.

                  But that’s unlikely to happen. It’s the same type of people that thought the mind was a machine in the first industrial revolution, and then a CPU in the third…now they think it’s an LLM.

                  LLMs could have some better (if narrower) applications if we could stop being so stupid as to inject them into places where they are obviously counterproductive.

      • @Carrolade@lemmy.world
        24 points · 9 days ago

        Predicting the next word vs predicting a word in the middle and then predicting backwards are not hugely different things. It’s still predicting parts of the passage based solely on other parts of the passage.

        Compared to a human who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used except to make sure I’m following the rules of grammar.

        • @Womble@lemmy.world
          3 points · 9 days ago

          Compared to a human who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used except to make sure I’m following the rules of grammar.

          Interesting that…

          Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”

          • @Carrolade@lemmy.world
            6 points · 9 days ago

            Yeah I caught that too, I’d be curious to know more about what specifically they meant by that.

            Being able to link all of the words that have a similar meaning, say, nearby, close, adjacent, proximal, side-by-side, etc., and realize they all share something in common could be done in many ways. Some would require an abstract understanding of what spatial distance actually is, an understanding of physical reality. Others would not; one could simply make use of word adjacency, noticing that all of these words are frequently used alongside certain other words. This would not be abstract; it’d be more of a simple sum of clear correlations. You could call this mathematical framework a universal language if you wanted.
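
            The non-abstract version could be as cheap as counting shared neighbors. A toy sketch, with a made-up mini corpus:

            ```python
            from collections import Counter
            from math import sqrt

            # Represent each word by the words that appear next to it, and call
            # two words similar when those neighbor counts line up.
            corpus = ("the shop is nearby the park . the shop is close to the park . "
                      "the shop is adjacent to the park . the dog ate the pizza .").split()

            def neighbor_counts(word: str, window: int = 2) -> Counter:
                counts = Counter()
                for i, w in enumerate(corpus):
                    if w == word:
                        lo, hi = max(0, i - window), i + window + 1
                        counts.update(x for x in corpus[lo:hi] if x != word)
                return counts

            def cosine(a: Counter, b: Counter) -> float:
                dot = sum(a[k] * b[k] for k in a)
                norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
                return dot / norm if norm else 0.0

            # "nearby" and "close" share neighbors; "nearby" and "pizza" don't.
            print(cosine(neighbor_counts("nearby"), neighbor_counts("close")))  # high
            print(cosine(neighbor_counts("nearby"), neighbor_counts("pizza")))  # low
            ```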

            Ultimately, a person learns meaning and then applies language to it. When I’m a baby I see my mother, and know my mother is something that exists. Then I learn the word “mother” and apply it to her. The abstract comes first. Can an LLM do something similar despite having never seen anything that isn’t a word or number?

            • @Womble@lemmy.world
              6 points · 9 days ago

              I don’t think that’s really a fair comparison. Babies exist with images and sounds for over a year before they begin to learn language, so it would make sense that they begin to understand the world in non-linguistic terms and then apply language to that. LLMs only exist in relation to language, so they couldn’t understand a concept separately from language; it would be like asking a person to conceptualise radio waves prior to having heard about them.

          • @MTK@lemmy.world
            3 points · 9 days ago

            Yeah, but I think this is still the same, just not with a single language. It might think in some mix of languages (which you can actually see sometimes if you push certain LLMs to their limit and they start producing mixed-language responses).

            But it still has limitations because of the structure of language. This is actually a thing that humans have as well: the limiting of abstract thought by internal-monologue thinking.

            • @Womble@lemmy.world
              3 points · 9 days ago

              Probably, given that LLMs only exist in the domain of language, but it’s still interesting that they seem to have a “conceptual” system that is commonly shared between languages.

          • @TimewornTraveler@lemm.ee
            -2 points · 7 days ago

            Wow, an AI researcher overhyping his own product. He’s just waxing poetic.

            We don’t even have a good sense of what thought IS. Please tell Claude to call the philosophers, because apparently he’s figured out consciousness.

      • @LarmyOfLone@lemm.ee
        -8 points · 8 days ago

        I mean, it implies that they CAN start with the conclusion, or the “thought”, and then generate the text to verbalize it.

        It’s shocking to what lengths humans will go to explain how their wetware neural network is fundamentally different and how it’s impossible for LLMs to think or reason in any way. Honestly, LLMs teach us more about human intelligence (or the lack thereof) than about machine intelligence. Like Obi-Wan said, “The ability to speak does not make one intelligent”, haha.

  • moonlight
    4 points · 9 days ago

    The math example in particular is very interesting, and makes me wonder if we could splice a calculator into the model, basically doing “brain surgery” to short circuit the learned arithmetic process and replace it.
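
    Mechanistically, something like that splice is expressible with a forward hook; the genuinely hard research problem is finding where to cut and how to encode the exact result back into activations. A hedged PyTorch sketch of just the splice mechanism, where the layer path and vector are hypothetical:

    ```python
    import torch

    def splice(module: torch.nn.Module, exact_result: torch.Tensor):
        """Replace `module`'s output with an externally computed value.
        Returning a value from a forward hook overrides the real output."""
        def hook(mod, inputs, output):
            return exact_result.expand_as(output)  # short-circuit the learned path
        return module.register_forward_hook(hook)

    # Hypothetical usage -- the layer path and vector are made up:
    # handle = splice(model.blocks[12].mlp, calculator_answer_as_vector)
    # ... run the model ...
    # handle.remove()  # undo the surgery
    ```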

    • Nougat
      3 points · 9 days ago

      That math process for adding the two numbers - there’s nothing wrong with it at all. Estimate the total and come up with a range. Determine exactly what the last digit is. In the example, there’s only one number in the range with 5 as the last digit. That must be the answer. Hell, I might even use that same method in my own head.

      The poetry example, people use that one often enough, too. Come up with a couple of words you would have fun rhyming, and build the lines around those words. Nothing wrong with that, either.

      These two processes are closer to “thought” than I previously imagined.

      • moonlight
        9 points · 9 days ago

        Well, it falls apart pretty easily. LLMs are notoriously bad at math. And even if it was accurate consistently, it’s not exactly efficient, when a calculator from the 80s can do the same thing.

        We have setups where LLMs can call external functions, but I think it would be cool and useful to be able to replace certain internal processes.

        As a side note though, while I don’t think that it’s a “true” thought process, I do think there’s a lot of similarity with LLMs and the human subconscious. A lot of LLM behaviour reminds me of split brain patients.

        And as for the math aspect, it does seem like it does math very similarly to us. Studies show that we think of small numbers as discrete quantities, but big numbers in terms of relative size, which seems like exactly what this model is doing.

        I just don’t think it’s a particularly good way of doing mental math. Natural intuition in humans and gradient descent in LLMs both seem to create layered heuristics that can become pretty much arbitrarily complex, but it still makes more sense to follow an exact algorithm for some things.

        • dual_sport_dork 🐧🗡️
          6 points · 9 days ago

          when a calculator from the 80s can do the same thing.

          1970’s! The little blighters are even older than most people think.

          Which is why I find it extra hilarious / extra infuriating that we’ve gone through all of these contortions and huge wastes of computing power and electricity to ultimately just make a computer worse at math.

          Math is the one thing that computers are inherently good at. It’s what they’re for. Trying to use LLMs to perform it half-assedly is a completely braindead endeavor.

          • @Jakeroxs@sh.itjust.works
            1 point · 8 days ago

            But who is going around asking these bots to specifically do math? Like, in normal usage I’ve never once done that, because I could just use a calculator, or spreadsheet software if I need to get fancy, lol.

    • @Not_mikey@slrpnk.net
      3 points · 9 days ago

      I think a lot of services are doing this behind the scenes already. Otherwise ChatGPT would be getting basic arithmetic wrong a lot more often, considering the methods the article has shown it’s using.

    • SharkAttak
      1 point · 9 days ago

      Do you mean like us, using an external calculator instead of doing it in our brain?

  • Pennomi
    7 points · 9 days ago

    This is great stuff. If we can properly understand these “flows” of intelligence, we might be able to write optimized shortcuts for them, vastly improving performance.

    • @LarmyOfLone@lemm.ee
      1 point · 8 days ago

      Better yet, teach the AI to write conventional code that replaces specific optimized AI subnetworks. Then automatically profile, optimize, and unit test!

    • @MTK@lemmy.world
      9 points · 9 days ago

      That has always been the case. Even basic programs need debugging sometimes, so we developed debuggers.

      • @LarmyOfLone@lemm.ee
        -3 points · 8 days ago

        Not really. When you program, you break the problem down into many smaller subprograms and then codify them. There are errors that need debugging, but never “how does this part of the program I wrote work?”. Reading someone else’s code is less fun than writing your own, but you can still understand it.

        There are some cases, like detergents: apparently until recently we didn’t know exactly how they work. But human-engineered tools are not comparable to this.

  • @Technoworcester@lemm.ee
    150 points · 8 days ago

    “…is weirder than you thought”

    I am as likely to click a link with that line as if it had “this one weird trick” or “side hustle” in it.

    I would really like it if headlines treated us like adults and got rid of clickbaity lines.

    • @BackgrndNoize@lemmy.world
      41 points · 8 days ago

      But then you wouldn’t need to click on their ad-infested shite website, where 1-2 paragraphs’ worth of actual information is stretched into a giant essay so that they can show you more ads the longer you scroll.

      • Tony Wu
        4 points · 8 days ago

        It really is quite unfortunate. I wish titles did what titles are supposed to do instead of being bait. But you are right: even when consciously trying to avoid clicking, sometimes curiosity gets the best of me. But I am improving.

      • @EpeeGnome@lemm.ee
        3 points · 8 days ago

        Well, I’m doing my part against them by refusing to click on any bait headlines, but I fear it’s a lost cause anyway.

        • @BeardedGingerWonder@feddit.uk
          5 points · 8 days ago

          I try to just ignore it and read what I’m interested in regardless. From what I hear about the YouTube algo, for instance, clickbait titles are a necessity more than a choice for YouTubers: if they don’t use them, they get next to no early engagement, the algo buries the video, and that can impact the channel in general.

  • @perestroika@lemm.ee
    10 points · 8 days ago

    Wow, interesting. :)

    Not unexpectedly, the LLM failed to explain its own thought process correctly.

    • @shneancy@lemmy.world
      4 points · 7 days ago

      Tbf, how do you know what to say and when? Or what 2+2 is?

      You learnt it? Well, so did the AI.

      I’m not an AI nut or anything, but we can barely comprehend our own internal processes; it’d be concerning if a thing humanity created was better at it than us, lol.

      • El Barto
        1 point · 7 days ago

        You’re comparing two different things.

        Of course I can reflect on how I came with a math result.

        “Wait, how did you come up with 4 when I asked you 2+2?”

        You can confidently say: “well, my teacher said it once and I’m just parroting it.” Or “I pictured two fingers in my mind, then pictured two more fingers and then I counted them.” Or “I actually thought that I’d say some random number, came up with 4 because it’s my favorite digit, said it and it was pure coincidence that it was correct!”

        Whereas it doesn’t seem like Claude can do this.

        Of course, you could ask me “what’s the physical/chemical process your neurons follow for you to form those four fingers you picture in your mind?” And I would tell you I don’t know. But again, that’s a different thing.

        • @shneancy@lemmy.world
          2 points · 7 days ago

          Yeah, I was referring more to the chemical reactions. The 2+2 example is not the best one, but language itself is a great case study. Once you get fluent enough at any language, everything just flows: you have a thought and then you compose words to describe it, and the reverse is true, you hear something and your brain just understands. How do we do any of that? No idea.

          • El Barto
            2 points · 6 days ago

            Understood. And yeah, language is definitely an interesting topic. “Why do you say ‘So be it’ instead of ‘So is it’?” Most people will say “I don’t know… all I know is that it sounds correct.” Someone will say “it’s because it’s a preterite preposition past imperfect incantation tense used with a composition participle around-the-clock flush adverb, so clearly you must use the subjunctive in this case.” But that’s after studying it years later.