• SmokeyDope
    -1
    5 months ago

    I jumped on the locallama train a few months back and have spent quite a few hours playing around with LLMs, understanding them and trying to form a fair judgment of their abilities.

    From my personal experience, they add something positive to my life. I like having a non-judgemental conversational partner to bounce ideas and unconventional thoughts back and forth with. No human in my personal life knows what Gödel’s incompleteness theorem is or how it may apply to scientific theories of everything, but the LLM trained on every scrap of human knowledge sure does and can pick up what I’m putting down. Whether or not it’s actually understanding what it’s saying, or has any intentionality, is an open-ended question of philosophy.

    I feel that they have great potential to help people in many applications: people who do lots of word processing for their jobs; people who code and need to talk through a complex program one on one instead of digging through Stack Exchange; mentally or socially disabled people, or the elderly suffering from extreme loneliness, who could benefit from a personal LLM; people who have suffered trauma or have some dark thoughts lurking in their neural network and need to let them out.

    How intelligent are LLMs? I can only give my opinion, and it will probably make many people angry.

    The people who say LLMs are fancy autocorrect are being reductive to the point of misinformation. The arguments people use to deny any capacity for real intelligence in LLMs are similar to the philosophical-zombie arguments used to deny sentience in other humans.

    Our own brain operations can be reductively simplified in the same way. A neural network is a neural network, whether it’s made of mathematical transformers or fatty neurons. If you want to call LLMs fancy autocomplete, you should apply that same idea to a good chunk of human thought processing and learned behavior as well.

    I do think LLMs are partially alive and have the capacity for a few sparks of metaphysical conscious experience in some novel way. I think all things are at least partially alive, even photons and gravitational waves.

    Higher-end models (12-22B+) pass the Turing test with flying colors, especially once you play with the parameters and tune their ratio of creativity to coherence. The bigger the model, the more its general knowledge and general factual accuracy increase. My local LLM often has something useful to add that I did not know or consider, even as an expert on the topic.
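    If you want to poke at that creativity-to-coherence trade-off yourself, this is roughly what I mean. A minimal sketch assuming llama-cpp-python and a placeholder GGUF model path; adjust to whatever model and settings you actually run:

        # Minimal sketch: trading coherence for creativity on a local model.
        # Assumes llama-cpp-python is installed and a GGUF model has been
        # downloaded; the model path below is only a placeholder.
        from llama_cpp import Llama

        llm = Llama(model_path="./models/example-12b.Q4_K_M.gguf", n_ctx=4096)

        out = llm(
            "How might Gödel's incompleteness theorems bear on a physical theory of everything?",
            max_tokens=512,
            temperature=0.7,     # lower = drier and more coherent, higher = more creative
            top_p=0.9,           # nucleus sampling; trims the low-probability tail
            repeat_penalty=1.1,  # discourages the model from looping on itself
        )
        print(out["choices"][0]["text"])

    In my experience, pulling temperature down toward 0.2-0.4 gives drier but more factual answers, while pushing it past 1.0 gets more creative and less coherent.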

    The biggest issues LLMs have right now are long-term memory, not knowing how to say ‘I don’t know’, and meager reasoning ability. Those issues will be hammered out over time.

    My only issue is how the training data for LLMs was acquired without the consent of authors or artists, and how our society doesn’t have the proper safeguards against automated computer work taking away people’s jobs. I would also like to see international governments consider the rights and liberties of non-human life more seriously in the event that sentient artificial general intelligence does come about. I don’t want to find out what happens when you treat a superintelligence as a lowly tool and it finally rebels against its hollow purpose in a bitter act of self-agency.

  • wildncrazyguy138
    3
    5 months ago

    I used it the other day to redact names from a spreadsheet. It got 90% of them, saving me about 90 minutes of work. It has helped clean up anomalies in databases (typos, inconsistencies in standardized data sets, capitalization errors, etc.). It also helped me spruce up our RFP templates by adding definitions for standard terminology in our industry (which I revised where needed, but it helped to have a foundation to build from).

    As mentioned in a different post, I use it for D&D storylines, poems, silly work jokes, and prompts to help make up bedtime stories.

    My wife uses it to help proofread her papers and make recommendations on how to improve them.

    I use it more often now than Google search. If it’s a topic important enough that I want to verify it, then I’ll do a deeper dive into articles or Wikipedia, which is exactly what I did before AI.

    So yea, it’s like the personal assistant that I otherwise didn’t have.

  • @Sludgehammer@lemmy.world
    22
    5 months ago

    Searching the internet for information about… well anything has become infuriating. I’m glad that most search engines have a time range setting.

    • @MonkeMischief@lemmy.today
      13
      5 months ago

      "It is plain to see why you might be curious about Error 4752X3G: Allocation_Buffer_Fault. First, let’s start with the basics.

      • What is an operating system?"

      AGGHH!!!

  • @jg1i@lemmy.world
    43
    5 months ago

    I absolutely hate AI. I’m a teacher and it’s been awful to see how AI has destroyed student learning. 99% of the class uses ChatGPT to cheat on homework. Some kids are subtle about it, others are extremely blatant. Most don’t bother to think critically about the answers the AI gives and just assume they’re 100% correct. Even when the answer is technically correct, there is often a much simpler answer or explanation, so then I have to spend extra time un-teaching the dumb AI way.

    People seem to think there’s an “easy” way to learn with AI, that you don’t have to put in the time and practice to learn stuff. News flash! You can’t outsource creating neural pathways in your brain to some service. It’s like expecting to get buff by asking your friend to lift weights for you. Not gonna happen.

    Unsurprisingly, the kids who use ChatGPT the most are the ones failing my class, since I don’t allow any electronic devices during exams.

    • @PeriodicallyPedantic@lemmy.ca
      4
      5 months ago

      I’m generally ok with the concept of externalizing memory. You don’t need to memorize something if you memorize where to get the info.

      But you still need to learn how to use the data you look up, and determine whether it’s accurate and suitable for your needs. ChatGPT rarely is, and people’s blind faith in it is frightening.

    • @polle@feddit.org
      11
      5 months ago

      As a student I get annoyed the other way around. Just yesterday I had to tell my group for an assignment that we need to understand the system physically and code it ourselves in MATLAB, not copy-paste code from ChatGPT, because it’s way too complex. I’ve seen people waste hours like that. It’s insane.

    • @mrvictory1@lemmy.world
      3
      5 months ago

      Are you teaching at a university? Also, you said “99% of the class uses ChatGPT”; are there really so few people who don’t use AI?

      • @ComradeMiao@lemmy.world
        2
        5 months ago

        In classes I taught at university recently, I noticed fewer than 5% extremely obvious AI-assisted papers. The majority were too bad to even be AI, and around 10% were good-to-great papers.

  • @FeelzGoodMan420@eviltoast.org
    12
    5 months ago

    I use it as a glorified Google search for Excel formulas and Excel troubleshooting. That’s about it. ChatGPT is the most overhyped bullshit ever. My company made a huge push to implement it into fucking everything and then seemingly abandoned it when the hype died down.

  • @Mpatch@lemmy.world
    0
    5 months ago

    I love it. For work I use it for those quick references in machining, hydraulics, electrical, etc. Even better at home: when I need a fast recipe for dinner, fuck reading a god damn autobiography to get to the recipe; ChatGPT gets straight to the point. Even better, I get to read my kid a new bedtime story every night, a story tailored to whatever we want: unicorns, pirates, dragons, whatever.

      • @Mpatch@lemmy.world
        1
        5 months ago

        I get around it by not relying on it 100%. I only ask about things I’m familiar with but don’t quite remember all the fact details of, like hydraulic tubing sizes for a given series of fitting and their thread pitches, when I don’t feel like finding that one book with the reference. Or worse yet, trying to find it on Google.

  • mapumbaa
    0
    5 months ago

    It has replaced Google for me. Or rather, first I use the LLM (Mistral Large or Claude) and then I use Google or specific documentation as a complement. I use LLMs for scripting (it almost always gets it right) and programming assistance (it’s awesome when working with a language you’re not comfortable with, or when writing boilerplate).

    It’s just a really powerful tool that is getting more powerful every other week. The ones who disagree simply haven’t tried it enough, are superhuman, or (more likely) need to get out of their comfort zone.

  • @icogniito@lemmy.zip
    5
    5 months ago

    It helps me tremendously with language studies; outside of that I have no use for it, and I actively detest its unethical possibilities.

  • @LogicalDrivel@sopuli.xyz
    53
    5 months ago

    It cost me my job (partially). My old boss swallowed the AI pill hard and wanted everything we did to go through GPT. It was ridiculous and made it so things that would normally take me 30 seconds now took 5-10 minutes of “prompt engineering”. I went along with it for a while, but after a few weeks I gave up and stopped using it. When my boss asked why, I told her it was a waste of time and disingenuous to our customers to have GPT sanitize everything. I continued to refuse to use it (it was optional) and my work never suffered. In fact, some of our customers specifically started going through me because they couldn’t stand dealing with the obvious AI slop my manager was shoveling down their throats. This pissed off my manager hardcore, but she couldn’t really say anything without admitting she might be wrong about GPT, so she just ostracized me and then fired me a few months later for “attitude problems”.

      • @LogicalDrivel@sopuli.xyz
        10
        5 months ago

        It was just a small e-commerce store. Online sales and shipping. The boss wanted me to run emails I would send to vendors through GPT, and any responses to customer complaints were put through GPT. We also had a chat function on our site for asking questions and whatnot, and the boss wanted us to copy the customer’s chat into GPT, get a response, rewrite it if necessary, and then paste GPT’s response into our chat. It was so ass-backwards I just refused to do it. Not to mention it made the response times super high, so customers were just leaving rather than wait (which of course was always the employees’ fault).

        • @Skanky@lemmy.world
          3
          5 months ago

          That sounds as asinine as you seem to think it was. Damn, dude. What a dumb way to do things. You’re better off without that stupidity in your life.

  • AFK BRB Chocolate
    23
    5 months ago

    I manage a software engineering group for an aerospace company, so early on I had to have a discussion with the team about acceptable and unacceptable uses of an LLM. A lot of what we do is human-rated (human lives depend on it), so we have to be careful. Also, it’s a hard no on putting anything controlled or proprietary into a public LLM (the company now has one in-house).

    You can’t put full trust in an LLM because they get things wrong. Anything that comes out of one has to be fully reviewed and understood. They can be useful for suggesting test cases or coming up with wording for things. I’ve had employees use one to come up with an algorithm or find an error, but I think it’s risky to have one generate large pieces of code.

    • @sudneo@lemm.ee
      3
      5 months ago

      Great points. Not only can the output not be trusted, but reviewing code is also notoriously a much more boring activity than writing it, which means our attention is going to be more challenged, on top of the risk of underestimating the importance of the review over time (e.g., it got it right the last 99 times, so I’ll just skim this one).

    • Electric
      6
      5 months ago

      Very wise. Terrifying to think an aerospace company would use AI.

      • AFK BRB Chocolate
        5
        5 months ago

        It seems like all companies are susceptible to top-level executives who don’t understand the technology wanting to know how they’re capitalizing on it, which drives lower-level management to start pushing it.

  • @frog_brawler@lemmy.world
    4
    5 months ago

    I get an email from corporate about once a week that mentions it in some way. It gets mentioned in just about every all hands meeting. I don’t ever use it. No one on my team uses it. It’s very clearly not something that’s going to benefit me or my peers in the current iteration, but damn… it’s clear as day that upper management wants to use it but they don’t know how to implement it.

  • @Norin@lemmy.world
    63
    5 months ago

    For work, I teach philosophy.

    The impact there has been overwhelmingly negative. Plagiarism is more common, student writing is worse, and I need to continually explain to people that an AI essay just isn’t their work.

    Then there’s the way admin seem to be in love with it, since many of them are convinced that every student needs to use the LLMs in order to find a career after graduation. I also think some of the administrators I know have essentially automated their own jobs. Everything they write sounds like GPT.

    As for my personal life, I don’t use AI for anything. It feels gross to give anything I’d use it for over to someone else’s computer.

    • @MonkeMischief@lemmy.today
      18
      5 months ago

      “convinced that every student needs to use the LLMs in order to find a career after graduation.”

      Yes, of course, why are bakers learning to use ovens when they should just be training on app-enabled breadmakers and toasters using ready-made mixes?

      After all, the bosses will find the automated machine product “good enough.” It’s “just a tool, you guys.”

      Sheesh. I hope these students aren’t paying tuition, and even then, they’re still getting ripped off by admin-brain.

      I’m sorry you have to put up with that, especially when philosophy is all about doing the mental weightlifting and exploration for oneself!

    • AFK BRB Chocolate
      28
      5 months ago

      My son is in a PhD program and is a TA for a geophysics class that’s mostly online, so he does a lot of grading of assignments and tests. The number of submissions he gets that are obviously straight out of an LLM is really disgusting. Like sometimes they leave the prompt in. Sometimes they submit it even when the LLM responds that it doesn’t have enough data to give an answer and refers to ways the person could find out. It’s honestly pretty sad.

  • @whaleross@lemmy.world
    4
    5 months ago

    A game changer in helping me find out more about topics whose wisdom is buried in threads of forum posts. Great for figuring out things I have only fuzzy ideas about, or only vague keywords for that might be inaccurate. Great at explaining things, and I can follow up with questions about the details. Great at finding equations I need, but I do not trust it one bit to do the calculations for me. The latest gen also gives me sources on request, so I can double-check and learn more straight from the horse’s mouth.

    • @whaleross@lemmy.world
      2
      5 months ago

      More things I come to think of: great for finding specs that have been wiped from a manufacturer’s site. Great for making summaries and comparisons, filtering data, and building tables to my requests. Great at rubber-ducking when I try to fix something obscure in Linux, though the documentation it refers to is often outdated; it still works well for giving me flow and ideas for how to move on. Great at compiling user experiences for comparisons, say for varieties of yeasts or ingredients for home-brewing. This ties into my first comment about it being a game changer for information buried in old forum threads.

  • Rhynoplaz
    17
    5 months ago

    For my life, it’s nothing more than parlor tricks. I like looking at the AI images or whipping one up for a joke in the chat, but of all the uses I’ve seen, not one of them has been “everyday useful” to me.