• @schnurrito@discuss.tchncs.de · 80 points · 11 months ago

    Messages that people post on Stack Exchange sites are literally licensed CC-BY-SA, the whole point of which is to enable them to be shared and used by anyone for any purpose. One of the purposes of such a license is to make sure knowledge is preserved by allowing everyone to make and share copies.

      • @bbuez@lemmy.world · 11 points · 11 months ago

        It does help to know what those funny letters mean. Now we wait for regulators to catch up…

        /tangent

        If anything, we’re a very long way from anything close to intelligence. OpenAI (and by extension MS, being publicly traded) sold investors on the pretense that LLMs are close to being “AGI,” and now more and more data is necessary to achieve that.

        If you know the internet, you know there’s a lot of garbage. I for one can’t wait for garbage-in garbage-out to start taking its toll.

        Also, I’m surprised how well open-source models have shaped up; they’re certainly worth a look. I occasionally use a local model for “brainstorming” in the loosest sense, as I generally know what I’m expecting, but it’s sometimes helpful to read the tasks laid out. There’s also comfort in that nothing ever needs to leave my network, and in a pinch I still got some answers when my network was offline.

        It gives a little hope, while corps get to blatantly violate copyright even as they wield it so heavily against everyone else, that advancements in open source have been so great.

    • @kerrigan778@lemmy.world · 106 points · 11 months ago

      That license would require ChatGPT to provide attribution every time it used anyone’s training data from there, and it would also require every output using that training data to be placed under the same license. This would legally prevent anything ChatGPT created, even in part, using this training data from being closed source. Given that they obviously aren’t planning on doing that, this is massively shitting on the concept of licensing.

      • @theherk@lemmy.world · 4 points · 11 months ago

        Maybe, but I don’t think that has been well tested legally yet. For instance, I’ve learned things from there, but when I share some knowledge I don’t attribute it to all the underlying sources of that knowledge. If, on the other hand, I shared a quote or copypasta from there, I suppose I’d be compelled to.

        I’m just not sure how neural networks will be treated in this regard. I assume they’ll conveniently claim that they can’t tie answers directly to underpinning training data.

        • @kerrigan778@lemmy.world · 20 points · 11 months ago

          Ethically and logically it seems like output based on training data is clearly derivative work. Legally I suspect AI will continue to be the new powerful tool that enables corporations to shit on and exploit the works of countless people.

          • @fruitycoder@sh.itjust.works · 2 points · 11 months ago

            The problem is the legal system and thus IP law enforcement is very biased towards very large corporations. Until that changes corporations will continue, as they already were, exploiting.

            I don’t see AI making it worse.

        • @General_Effort@lemmy.world · 1 point · 11 months ago

          They are not. A derivative would be a translation or a theater play; nowadays, a game or a movie, or even stuff set in the same universe.

          Expanding the meaning of “derivative” so massively would mean that pretty much any piece of code ever written is a derivative of technical documentation and even textbooks.

          So far, judges simply throw out these theories, without even debating them in court. Society would have to move a lot further to the right, still, before these ideas become realistic.

      • JohnEdwa · 25 points · edited · 11 months ago

        CC attribution doesn’t necessarily require you to have the credits immediately alongside the content, but it would result in one of the world’s longest web pages, as it would need the name of each poster and a link to every single comment used as training data, and Stack Overflow has roughly 60 million questions and answers combined.

        • @Scrollone@feddit.it · 1 point · 11 months ago

          They don’t need to republish the 60 million questions; they just have to credit the authors, who are surely far fewer (but IANAL).

          • JohnEdwa · 1 point · 11 months ago

            appropriate credit — If supplied, you must provide the name of the creator and attribution parties, a copyright notice, a license notice, a disclaimer notice, and a link to the material. CC licenses prior to Version 4.0 also require you to provide the title of the material if supplied, and may have other slight differences.

            Maybe that could be just a link to the user page, but otherwise I would see it as needing to link to each message or comment they used.

  • athos77 · 20 points · 11 months ago

    For years, the site had a standing policy that prevented the use of generative AI in writing or rewording any questions or answers posted. Moderators were allowed and encouraged to use AI-detection software when reviewing posts. Beginning last week, however, the company began a rapid about-face in its public policy towards AI.

    I listened to an episode of The Daily on AI, and the stuff they fed into the engines included the entire Internet. They literally ran out of things to feed it. That’s why YouTube created their auto-generated subtitles: literally, so that they would have more material to feed into their LLMs. I fully expect Reddit to be bought out/merged within the next six months or so. They are desperate for more material to feed the machine. Everything is going to end up going to an LLM somewhere.

    • elgordio · 9 points · 11 months ago

      I think auto-generated subtitles were to fulfil an FCC requirement for content subtitling, some years ago. They have, however, turned out to be super useful for LLM feeding.

    • @stoly@lemmy.world · 2 points · 11 months ago

      There really isn’t much in the way of detection. It’s a big problem in schools and universities, and the plagiarism detectors can’t reliably detect AI.

  • @0oWow@lemmy.world · -2 points · 11 months ago

    Anyone care to explain why people would care that they posted to a public forum that they don’t own, with content that is now further being shared for public benefit?

    The argument that it’s your content becomes false as soon as you shared it with the world.

    • @Emotet@slrpnk.net · 37 points · 11 months ago

      It’s not shared for public benefit, though. OpenAI, despite the Open in their name, charges for access to their models. You either pay with money or (meta)data, depending on the model.

      Legally, sure. You signed away your rights to your answers when you joined the forum. Morally, though?

      People are pissed that SO, which was actively encouraging mods to use AI-detection software to prevent any LLM usage in posted questions and answers, is now selling the publicly accessible data, made by its users for free, to a closed-source, for-profit entity that refuses to open itself up.

      Basically the same story as with reddit.

      • @golli@lemm.ee · 11 points · 11 months ago

        Agreed. As you said it’s a similar situation as with reddit, where I decided to delete my comments.

        My reasoning is that those contributions were given under the premise that everybody was sharing to help each other.

        Now that premise has changed: the large tech companies are only taking, and the platform providers are changing the rules as well to profit from it.

        So as a result I packed my things and left; in the case of Reddit, to here.

        That said I think both views are valid and I wouldn’t fault those that think differently.

    • @gencha@lemm.ee · 3 points · 11 months ago

      It is your content. But SE specifically only accepts CC-licensed content, which makes you right.

    • TheOneCurly · 44 points · 11 months ago

      I can only really speak to reddit, but I think this applies to all of the user generated content websites. The original premise, that everyone agreed to, was the site provides a space and some tools and users provide content to fill it. As information gets added, it becomes a valuable resource for everyone. Ads and other revenue streams become a necessary evil in all this, but overall directly support the core use case.

      Now that content is being packaged into large language models, to be either put behind a paywall or packed into other non-freely-available services. Since they no longer seem interested in supporting the model we all agreed on, I see no reason to continue adding value, and since they provided tools to remove content, I may as well use them.

      • @0oWow@lemmy.world · 1 point · 11 months ago

        But from the very beginning years ago, it was understood that when you post on these types of sites, the data is not yours, or at least you give them license to use it how they see fit. So for years people accepted that, but are now whining because they aren’t getting paid for something they gave away.

        • TheOneCurly · 1 point · edited · 11 months ago

          This is legal vs. rude. It certainly is legal, and was in the terms of service, for them to use the data in any way they see fit. But it’s also rude to bait-and-switch from being a message board to being an AI data-source company. Users were led to believe they were entering into an agreement with one type of company and are now in an agreement with a totally different one.

          You can smugly tell people they shouldn’t have made that decision 15 years ago when they started, but a little empathy is also cool.

          Additionally: When you owe your entire existence and value to user goodwill it might not be a great idea to be rude to them.

      • Possibly linux · -3 points · 11 months ago

        Well no, when you post something it is public and out of your control

        • @LainTrain@lemmy.dbzer0.com · 2 points · edited · 11 months ago

          No, it shouldn’t be that you can post something in public, have it appropriated by a megacorp for money, and then be prevented from deleting or modifying the very things you posted.

          I’m pro-AI btw. But AI for all.

  • FaceDeer · 6 points · 11 months ago

    This sort of thing is so self-sabotaging. The website already has your comment, and a license to use it. By deleting your stuff from the web you only ensure that the AI is definitely going to be the better resource to go to for answers.

    • @Rolando@lemmy.world · 18 points · 11 months ago

      I’m not sure about that… in Europe don’t you have the right to insist that a website no longer use your content?

      • @000@fuck.markets · 9 points · 11 months ago

        Not when you’ve agreed to a terms of service that hands over ownership of your content to Stack Overflow, leaving you merely licensed to use your own content.

      • @Z3k3@lemmy.world · 0 points · 11 months ago

        That’s an interesting point. I wonder how LLMs would handle the GDPR. Would it be like having a tiny piece of your brain cut out?

    • @BraveLittleToaster@lemmy.world · 12 points · 11 months ago

      Also, backups and deleted flags. Whatever comment you submitted is likely backed up already, and even if you click the delete button you’re likely only changing a flag.
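
      The “flag” part can be sketched in a few lines (a hypothetical toy schema, not any real site’s database):

```python
# Toy sketch of "soft delete": the delete button flips a flag,
# but the row, and your comment text, stays in the database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE comments "
    "(id INTEGER PRIMARY KEY, body TEXT, deleted INTEGER NOT NULL DEFAULT 0)"
)
conn.execute("INSERT INTO comments (body) VALUES ('my helpful answer')")

# "Deleting" typically runs an UPDATE, not a DELETE:
conn.execute("UPDATE comments SET deleted = 1 WHERE id = 1")

# Ordinary readers see nothing...
visible = conn.execute("SELECT body FROM comments WHERE deleted = 0").fetchall()
# ...but the content is still there for anyone with direct access.
retained = conn.execute("SELECT body FROM comments WHERE deleted = 1").fetchall()
```

      (Whether backups also retain the row is a separate question; the point is that “delete” rarely means erase.)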

      • @gencha@lemm.ee · 1 point · 11 months ago

        I feel like a lot of people don’t understand the most basic things about the site. Any user with enough internet points can see deleted posts.

  • @unreasonabro@lemmy.world · 163 points · 11 months ago

    See, this is why we can’t have nice things. Money fucks it up, every time. Fuck money, it’s a shitty backwards idea. We can do better than this.

  • kubica · 56 points · 11 months ago

    I’m going to run out of sites at this pace.

    • @herrcaptain@lemmy.ca · 45 points · 11 months ago

      Right? It seems like the modern internet is made up of like 5 monolithic sites, and unlimited SEO spam.

      I know that’s not literally true, but it sure feels like it.

    • FaceDeer · -1 points · 11 months ago

      Fortunately the AIs are getting quite good at answering technical questions like these.

  • @trailee@sh.itjust.works · 11 points · 11 months ago

    They seem to only be watching the questions right now. You’re automatically prevented from deleting an accepted answer, but if you answered your own question (maybe because SO was useless for certain niche questions a decade ago so you kept digging and found your own solution), you can unaccept your answer first and then delete it.

    I got a 30 day ban for “defacing” a few of my 10+ year old questions after moderators promptly reverted the edits. But they seem to have missed where I unaccepted and deleted my answers, even as they hang out in an undeletable state (showing up red for me and hidden for others).

    And comments, which are a key part to properly understanding a lot of almost-correct answers, don’t seem to be afforded revision history or to have deletes noticed by moderators.

    So it seems like you can still delete a bunch of your content, just not the questions. Do with that what you will.

  • @Bell@lemmy.world · 183 points · 11 months ago

    Take all you want, it will only take a few hallucinations before no one trusts LLMs to write code or give advice

    • FaceDeer · 12 points · 11 months ago

      Maybe for people who have no clue how to work with an LLM. They don’t have to be perfect to still be incredibly valuable; I make use of them all the time, and hallucinations aren’t a problem if you use the right tool for the job in the right way.

      • @stonerboner@lemmynsfw.com · 2 points · 11 months ago

        This. I use LLMs for work, primarily to help create extremely complex nested functions.

        I don’t count on LLM’s to create anything new for me, or to provide any data points. I provide the logic, and explain exactly what I want in the end.

        I take a process which normally takes 45 minutes daily, test it once, and now I have reclaimed 43 extra minutes of my time each day.

        It’s easy and safe to test before I apply it to real data.

        It’s missed the mark a few times as I learned how to properly work with it, but now I’m consistently getting good results.

        Other use cases are up for debate, but I agree when used properly hallucinations are not much of a problem. When I see people complain about them, that tells me they’re using the tool to generate data, which of course is stupid.

        • Aniki 🌱🌿 · 1 point · 11 months ago

          This is how I use it as well. I also have it write tests with the code I give it.

        • @VirtualOdour@sh.itjust.works · 1 point · 11 months ago

          Yeah, it’s an obvious sign they’re either not coders at all or don’t understand the tech at all.

          Asking it direct questions or to construct functions with given inputs and outputs can save hours, especially with things that disrupt the main flow of coding - I don’t want to empty the structure of what I’m working on from my head just so I can remember everything needed to do something somewhat trivial like calculate the overlapping volume of two tetrahedrons. Of course I could solve it myself but just reading through the suggestion it offers and getting back to solving the real task is so much nicer.

      • @barsquid@lemmy.world · 23 points · 11 months ago

        The last time I saw someone talk about using the right LLM tool for the job, they were describing turning two minutes of writing a simple map/reduce into one minute of reading enough to confirm the generated one worked. I think I’ll pass on that.

        • @JDubbleu@programming.dev · 1 point · 11 months ago

          That’s a 50% time reduction for the same output which sounds great to me.

          I’d much rather let an LLM do the menial shit with my validation while I focus on larger problems such as system and API design, or creating rollback plans for major upgrades instead of expending mental energy writing something that has been written a thousand times. They’re not gonna rewrite your entire codebase, but they’re incredibly useful for the small stuff.

          I’m not even particularly into LLMs, and they’re definitely not gonna change the world in the way big tech would like you to believe. However, to deny their usefulness is silly.

          • @barsquid@lemmy.world · 1 point · 11 months ago

            It’s not a consistent 50%, it’s 50% off one task that’s so simple it takes two minutes. I’m not doing enough of that where shaving off minutes is helpful. Maybe other people are writing way more boilerplate than I am or something.

            • @JDubbleu@programming.dev · 1 point · 11 months ago

              Those little things add up though, and it’s not just good at boilerplate. Also just having a more intelligent context-aware auto complete itself I’ve found to be super valuable.

        • @linearchaos@lemmy.world · 19 points · 11 months ago

          confirm the generated one worked. I think I’ll pass on that

          LLM wasn’t the right tool for the job, so search engine companies made their search engines suck so bad that it was an acceptable replacement.

          • @NuXCOM_90Percent@lemmy.zip · 13 points · 11 months ago

            Honestly? I think search engines are actually the best use for LLMs. We just need them to be “explainable” and actually cite things.

            Even going back to the AOL days, Ask Jeeves was awesome and a lot of us STILL write our google queries in question form when we aren’t looking for a specific factoid. And LLMs are awesome for parsing those semi-rambling queries like “I am thinking of a book. It was maybe in the early 00s? It was about a former fighter pilot turned ship captain leading the first FTL expedition and he found aliens and it ended with him and humanity fighting off an alien invasion on Earth” and can build on queries to drill down until you have the answer (Evan Currie’s Odyssey One, by the way).

            Combine that with citations of what page(s) the information was pulled from and you have a PERFECT search engine.

            • @notabot@lemm.ee · 12 points · 11 months ago

              That may be your perfect search engine; I just want proper boolean operators on a search engine that doesn’t think it knows what I want better than I do, and doesn’t pad the results out with pages that don’t match all the criteria just for the sake of it. The sort of thing you described would be anathema to me, as I suspect my preferred option may be to you.

            • @linearchaos@lemmy.world · 1 point · 11 months ago

              They are VERY, VERY good at search-engine work, with a few caveats that we’ll eventually nail. The problem is, they’re WAY too expensive for that purpose. Single queries take tons of compute and power, and constant training on new data takes boatloads of power.

              They’re the opposite of efficient; eventually, they’ll have to start charging you a subscription to search with them to stay in business.

            • @Grandwolf319@sh.itjust.works · 1 point · 11 months ago

              So my company said they might use it to improve confluence search, I was like fuck yeah! Finally a good use.

              But to be fair, that’s mostly because confluence search sucks to begin with.

        • @Grandwolf319@sh.itjust.works · 5 points · 11 months ago

          Yeah, every time someone says how useful they find LLM for code I just assume they are doing the most basic shit (so far it’s been true).

    • @NuXCOM_90Percent@lemmy.zip · -18 points · 11 months ago

      We already have those near constantly. And we still keep asking queries.

      People assume that LLMs need to be ready to replace a principal engineer or a doctor or lawyer with decades of experience.

      This is already at the point where we can replace an intern or one of the less good junior engineers. Anyone who has done code review, or done rounds with medical interns, knows they are idiots who need people to check their work constantly. An LLM making up some function because it saw it on Stack Overflow but never tested it is not at all different from a hotshot intern who copied some code from Stack Overflow and never tested it.

      Except one costs a lot less…

      • NaibofTabr · 47 points · edited · 11 months ago

        This is already at the point where we can replace an intern or one of the less good junior engineers.

        This is a bad thing.

        Not just because it will put the people you’re talking about out of work in the short term, but because it will prevent the next generation of developers from getting that low-level experience. They’re not “idiots”, they’re inexperienced. They need to get experience. They won’t if they’re replaced by automation.

        • @ipkpjersi@lemmy.ml · 4 points · edited · 11 months ago

          First a nearly unprecedented worldwide pandemic, followed almost immediately by record-breaking layoffs, and then AI taking over the world; man, it is really not a good time to start out as a newer developer. I feel so fortunate that I started working full-time as a developer nearly a decade ago.

          • @morrowind@lemmy.ml · 3 points · 11 months ago

            Dude the pandemic was amazing for devs, tech companies hiring like mad, really easy to get your foot in the door. Now, between all the layoffs and AI it is hellish

            • @ipkpjersi@lemmy.ml · 1 point · 11 months ago

              I think it depends on where you live. Hiring didn’t go crazy where I live, but the layoffs afterwards sure did.

      • @assassin_aragorn@lemmy.world · 11 points · 11 months ago

        This is already at the point where we can replace an intern or one of the less good junior engineers. Because anyone who has done code review or has had to do rounds with medical interns know… they are idiots who need people to check their work constantly.

        Do so at your own peril. Because the thing is, a person will learn from their mistakes and grow in knowledge and experience over time. An LLM is unlikely to do the same in a professional environment for two big reasons:

        1. The company using the LLM would have to send data back to the creator of the LLM. This means their proprietary work could be at risk. The AI company could scoop them, or a data leak would be disastrous.

        2. Alternatively, the LLM could self-learn and run solely in-house, without any external data connections. An LLM company will never go for this, because it would mean the model is improving and developing out of their control, and the customized version may end up better than the LLM company’s own future releases. Or something might go terribly wrong with the model while it learns and adapts. Even if the LLM company isn’t held legally liable, they’re still going to lose that business going forward.

        On top of that, you need your inexperienced noobs to one day become the ones checking the output of an LLM. They can’t do that unless they get experience doing the work. Companies already have proprietary models that just require the right inputs and pressing a button. Engineers are still hired though to interpret the results, know what inputs are the right ones, and understand how the model works.

        A company that tries replacing them with LLMs is going to lose in the long run to competitors.

        • @NuXCOM_90Percent@lemmy.zip · 0 points · 11 months ago

          Actually, Nvidia recently announced tooling built around RAG (retrieval-augmented generation). Basically, the idea is that you take an “off the shelf” LLM and then feed your local instance sensitive corporate data, which it can then use in its responses.

          So you really are “teaching” it every time you do a code review of the AI’s merge request and say “well, that function doesn’t exist” or “you didn’t use useful variable names” and so forth. Which… is a lot more than I can say about even some senior or principal engineers I have worked with over the years, who are very much making mistakes that would get an intern assigned to sorting crayons.
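
          The retrieval step can be sketched roughly like this (a toy bag-of-words scorer over a made-up corpus; real RAG systems use learned embeddings and a vector store):

```python
# Toy retrieval-augmented generation: find the most relevant private
# document and prepend it to the prompt, without retraining the LLM.
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words token-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Rank documents by similarity to the query; keep the top k."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus):
    """Prepend retrieved context so an unchanged LLM can use private data."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The deploy script lives in tools/deploy.sh and needs VPN access.",
    "Lunch is served at noon in building 4.",
]
prompt = build_prompt("How do I run the deploy script?", corpus)
```

          (No actual model call is shown; the point is that the LLM itself never changes, only the prompt it receives does.)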

          Which, again, gets back to the idea of having less busywork. Less grunt work. Less charlie work. Instead, focus on developers who can actually contribute to a team and design meetings.

          And the model I learned early in my career that I bring to every firm is to have interns be a reward for talented engineers and not a punishment for people who weren’t paying attention in Nose Goes. Teaching a kid to write a bunch of utility functions does nothing they didn’t learn (or not learn) in undergrad but it is a necessary evil… that an AI can do.

          Instead, the people who are good at their jobs and contributing to the overall product? They probably have ideas they want to work on but don’t have the cycles to flesh out. That is where interns come into play. They work with those devs and other staff and learn what it means to actually be part of a team. They get to work on really cool projects and their mentors get to ALSO work on really cool projects but maybe focus more on the REALLY interesting parts and less on the specific implementation.

          And result is that your interns are now actually developers who are worth a damn.

          Also: One of the most important things to teach a kid is that they owe the company nothing. If they aren’t getting the raise they feel they deserve then they need to be updating their linkedin and interviewing elsewhere. That is good for the worker. And that also means that the companies that spend a lot of money training up grunts? They will lose them to the companies who are desperate for people who can lead projects and contribute to designs but haven’t been wasting money on writing unit tests.

      • @LucidNightmare@lemmy.world · 30 points · 11 months ago

        So, the whole point of learning is to ask questions of people who know more than you, so that you can gain the knowledge you need to succeed…

        So… if you try to use these LLMs to replace parts of sectors where there need to be people who can work their way up to the next tier as they learn more and get better in their respective fields, you do realize that eventually there will no longer be people who can move up a tier/position, because people like you said “Fuck ‘em, all in on this stupid LLM bullshit!” So now there are no more doctors, or real programmers, because people like you thought it would just be the GREATEST idea to replace humans with fucking LLMs.

        You do see that, right?

        Calling people fucking stupid, because they are learning, is actually pretty fucking stupid.

        • @NuXCOM_90Percent@lemmy.zip · -17 points · edited · 11 months ago

          Where did I say “Fuck 'em, all in on this stupid LLM bullshit!”?

          But yes, there is a massive labor issue coming. That is why I am such a proponent of Universal Basic Income because there are not going to be enough jobs out there.

          But as for training up the interns: Back in the day, do you know what “interns” did? And by “interns” I mean women because sexism but roll with me. Printing out and sorting punch cards. Compilers and general technical advances got rid of those jobs and pushed up where the “charlie work” goes.

          These days? There are good internships/junior positions and bad ones. A good one actually teaches skills and encourages the worker to contribute. A bad one has them do the mindless grunt work that nobody else wants to. LLMs get rid of the latter.

          And… I actually think that is good for the overall health of workers, if not their number (again, UBI). Because if someone can’t be trusted to write meaningful code without copying it off the internet and not even updating the variable names? I don’t want to work with them. I spend too much of my workday babysitting those morons who are just there to get some work experience so they can con their way into a different role and be someone else’s problem.

          And experience will be gained the way it is increasingly being gained. Working on (generally open source) projects and interviewing for competitive internships where the idea is to take relatively low cost workers and have them work on a low ROI task that is actually interesting. It is better for the intern because they learn actual development and collaboration skills. And it is better for the staff because it is a way to let people work on the stuff they actually want to do without the massive investment of a few hundred hours of a Senior Engineer’s time.

          And… there will be a lot fewer of those roles. Just like there were a lot fewer roles for artists as animation tools stopped requiring every single cell of animation to be hand drawn. And that is why we need to decouple life from work through UBI.

          But also? If we have fewer internships that consist of “okay, good job, thanks for that. Next time can you at least try to compile your code? Or pay attention to the squiggly red lines in your IDE? Or listen to the person telling you that is wrong?”, then we have better workers and better junior developers who can actually do more meaningful work. And we’ll actually need to update the interviewing system to not just be “did you memorize this book of questions from Amazon?”, and we’ll have fewer “hot hires” who surprise everyone by being able to breathe unassisted but have a very high salary because they worked for Facebook.

          Because, and here is the thing: LLMs are already as good, if not better than, an intern or junior engineer. And the companies that spend money on training up interns aren’t going to be rewarded. Under capitalism, there is no reason to “take one for the team” so that your competition can benefit.

    • @kibiz0r@midwest.social
      link
      fedilink
      English
      1011 months ago

      The quality really doesn’t matter.

      If they manage to strip any concept of authenticity, ownership or obligation from the entirety of human output and stick it behind a paywall, that’s pretty much the whole ball game.

      If we decide later that this is actually a really bullshit deal – that they get everything for free and then sell it back to us – then they’ll surely get some sort of grandfather clause because “Whoops, we already did it!”

    • @sramder@lemmy.world
      link
      fedilink
      English
      8311 months ago

      […]will only take a few hallucinations before no one trusts LLMs to write code or give advice

      Because none of us have ever blindly pasted some code we got off google and crossed our fingers ;-)

      • @Hackerman_uwu@lemmy.world
        link
        fedilink
        English
        411 months ago

        When you paste that code you do it in your private IDE, in a dev environment and you test it thoroughly before handing it off to the next person to test before it goes to production.

        Hitting up ChatGPT for the answer to a question that you then vomit out in a meeting as if it’s knowledge is totally different.

        • @sramder@lemmy.world
          link
          fedilink
          English
          211 months ago

          Which is why I used the former as an example and not the latter.

          I’m not trying to make a general case for AI generated code here… just poking fun at the notion that a few errors will put people off using it.

      • @Seasm0ke@lemmy.world
        link
        fedilink
        English
        311 months ago

        Split segment of data without pii to staging database, test pasted script, completely rewrite script over the next three hours.
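        A minimal sketch of that workflow, assuming made-up column names and a hypothetical PII list (nothing here comes from the comment itself):

        ```python
        # Hypothetical sketch: copy a segment of rows to staging with PII
        # columns dropped, then test the pasted script against the copy.
        PII_COLUMNS = {"name", "email", "phone"}

        def strip_pii(rows):
            # Drop any column whose name is on the PII list before staging
            return [
                {k: v for k, v in row.items() if k not in PII_COLUMNS}
                for row in rows
            ]

        segment = [
            {"id": 1, "name": "Ada", "email": "a@example.com", "score": 9},
            {"id": 2, "name": "Bob", "email": "b@example.com", "score": 4},
        ]
        staging = strip_pii(segment)
        print(staging)  # [{'id': 1, 'score': 9}, {'id': 2, 'score': 4}]
        ```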

      • Avid Amoeba
        link
        fedilink
        English
        83
        edit-2
        11 months ago

        It’s way easier to figure that out than check ChatGPT hallucinations. There’s usually someone saying why a response in SO is wrong, either in another response or a comment. You can filter most of the garbage right at that point, without having to put it in your codebase and discover that the hard way. You get none of that information with ChatGPT. The data spat out is not equivalent.

        • deweydecibel
          link
          fedilink
          English
          3111 months ago

          That’s an important point, and it ties into the way ChatGPT and other LLMs take advantage of a flaw in the human brain:

          Because it impersonates a human, people are more inherently willing to trust it. To think it’s “smart”. It’s dangerous how people who don’t know any better (and many people that do know better) will defer to it, consciously or unconsciously, as an authority and never second guess it.

          And because it’s a one-on-one conversation, with no comment section and no one else looking at the responses to call them out as bullshit, the user just won’t second-guess it.

    • @antihumanitarian@lemmy.world
      link
      fedilink
      English
      711 months ago

      Have you tried recent models? They’re not perfect, no, but they can usually get you most of the way there, if not all the way. Granted, you need to know how to structure the problem and the prompt.

      • @Amanduh@lemm.ee
        link
        fedilink
        English
        311 months ago

        Yeah but if you’re not feeding it protected code and just asking simple questions for libraries etc then it’s good

      • @Grandwolf319@sh.itjust.works
        link
        fedilink
        English
        311 months ago

        I feel like it would have to cause an actual disaster, with assets getting destroyed, before it becomes part of common knowledge (like the Challenger shuttle or something).

      • @Cubes@lemm.ee
        link
        fedilink
        English
        511 months ago

        If you use LLMs in your professional work, you’re crazy

        Eh, we use Copilot at work and it can be pretty helpful. You should always check and understand any code you commit to any project, so if you just blindly paste flawed code (like with Stack Overflow), that’s kind of on you for not understanding what you’re doing.

        • @Spedwell@lemmy.world
          link
          fedilink
          English
          311 months ago

          The issue on the copyright front is the same kind of professional standards and professional ethics that should stop you from just outright copying open-source code into your application. It may be very small portions of code, and you may never get caught, but you simply don’t do that. If you wouldn’t steal a function from a copyleft open-source project, you shouldn’t use that function when Copilot suggests it. Idk if Copilot has added license tracing yet (been a while since I used it), but absent that feature you are entirely blind to the extent to which its output is infringing on licenses. That’s a huge legal liability for your employer, and an ethical coinflip.


          Regarding understanding of code, you’re right. You have to own what you submit into the codebase.

          The drawbacks/risks of using LLMs or Copilot have more to do with the fact that they generate the *likely* code, which means the output is statistically biased toward whatever common and unnoticeable bugged logic exists in the average GitHub repo it trained on. It will at some point give you code you read and say “yep, looks right to me” that actually has a subtle buffer overflow, or actually fails in an edge case, because it fails in a way that is just unnoticeable enough.

          And you can make the argument that it’s your responsibility to find that (it is). But I’ve seen some examples thrown around on Twitter of just slightly bugged loops; I’ve seen examples of it replicating known vulnerabilities; and we have that package name fiasco in that first article above.

          If I ask myself would I definitely have caught that? the answer is only a maybe. If it replicates a vulnerability that existed in open-source code for years before it was noticed, do you really trust yourself to identify that the moment copilot suggests it to you?

          I guess it all depends on stakes too. If you’re generating buggy JavaScript who cares.
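          The “looks right to me” failure mode is easy to sketch. Here’s a hypothetical sliding-window average (not from any real suggestion) with a classic off-by-one: both versions pass a casual glance, but the first silently drops the last window.

          ```python
          # Hypothetical "looks right to me" code. The off-by-one in range()
          # silently drops the final window, which a quick review can miss.
          def window_averages_buggy(xs, k):
              # BUG: should be len(xs) - k + 1; the last window never appears
              return [sum(xs[i:i + k]) / k for i in range(len(xs) - k)]

          def window_averages_fixed(xs, k):
              return [sum(xs[i:i + k]) / k for i in range(len(xs) - k + 1)]

          data = [1, 2, 3, 4]
          print(window_averages_buggy(data, 2))  # [1.5, 2.5] — last window missing
          print(window_averages_fixed(data, 2))  # [1.5, 2.5, 3.5]
          ```

          Both compile, both return plausible numbers; only an edge-case test catches the difference.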

    • capital
      link
      fedilink
      English
      9
      edit-2
      11 months ago

      People keep saying this but it’s just wrong.

      Maybe I haven’t tried the language you have but it’s pretty damn good at code.

      Granted, whatever it puts out needs to be tested and possibly edited but that’s the same thing we had to do with Stack Overflow answers.

      • @VirtualOdour@sh.itjust.works
        link
        fedilink
        English
        211 months ago

        I use it all the time and it’s brilliant when you put in the basic effort to learn how to use it effectively.

        It’s allowing me and other open source devs to increase the scope and speed of our contributions; just talking through problems is invaluable. Greedy, selfish people wanting to destroy things that help so many is exactly the rolling-coal mentality: fuck everyone else, I don’t want the world to change around me! Makes me so despondent about the future of humanity.

  • Hypx
    link
    fedilink
    6611 months ago

    Eventually, we will need a fediverse version of StackOverflow, Quora, etc.

    • Thomas
      link
      fedilink
      English
      7711 months ago

      Those would be harvested to train LLMs even without asking first. 😐

      • mox
        link
        fedilink
        English
        1011 months ago

        Assuming the federated version allowed contributor-chosen licenses (similar to GitHub), any harvesting in violation of the license would be subject to legal action.

        Contrast that with Stack Exchange, where I assume the terms dictated by Stack Exchange deprive contributors of recourse.

      • @linearchaos@lemmy.world
        link
        fedilink
        English
        3411 months ago

        Honestly? I’m down with that. And when the LLM’s end up pricing themselves out of usefulness, we’ll still have the fediverse version. Having free sites on the net with solid crowd-sourced information is never a bad thing even if other people pick up the data and use it.

        It’s when private sites like Duolingo and Reddit crowd source the information and then slowly crank down the free aspect that we have the problems.

        The Ad sponsored web model is not viable forever.

      • @Rolando@lemmy.world
        link
        fedilink
        English
        211 months ago

        But users and instances would be able to state that they do not want their content commercialized. On StackOverflow you have no control over that.

      • @sramder@lemmy.world
        link
        fedilink
        English
        4511 months ago

        At this point I’m assuming most if not all of these content deals are essentially retroactive. They already scraped the content and found it useful enough to try and secure future use, or at least exclude competitors.

        • Ricky Rigatoni
          link
          fedilink
          English
          1311 months ago

          They scraped the content, liked the results, and are only making these deals because it’s cheaper than getting sued.

          • @AeroLemming@lemm.ee
            link
            fedilink
            English
            311 months ago

            Can they really sue (with a chance of winning) if you scrape content that’s submitted by users? That’s insane.

      • chameleon
        link
        fedilink
        711 months ago

        SO already was. Not even harvested as much as handed to them. Periodic data dumps and a general forced commitment to open information were a big part of the reason they won out over other sites that used to compete with them. SO most likely wouldn’t have existed if Experts Exchange didn’t paywall their entire site.

        As with everything else, AI companies believe their training data operates under fair use, so they will discard the CC BY-SA 4.0 license requirements regardless of whether this deal exists. (And if a court ever finds it’s not fair use, they are so many layers of fucked that this situation won’t even register.)

      • @VirtualOdour@sh.itjust.works
        link
        fedilink
        English
        411 months ago

        Yeah but didn’t you see the sovereign citizens who think licenses are magic posting giant copyright notices after their posts? Lol

        It’s so childish. AI tools will help billions of the poorest people access life-saving knowledge and services, and help open source devs like myself create tools that free people from the clutches of capitalism. But they like living in a world of inequity: their generational wealth, earned from centuries of exploitation of the impoverished, gives them better education, better healthcare, and better living standards than the billions of impoverished people on the planet, so they’ll fight to maintain their privilege even if they’re fighting against their own life getting better too. The most pathetic thing is they pretend to be fighting a moral crusade, as if using the answers they freely posted and never expected anything in return for is a real injustice!

        And yes, I know people are going to pretend that they think tech bros won’t allow poor people to use their tech, based on the assumption that how everything always works will suddenly flip into reverse at some point? Like how mobile phones are only for rich people, and only rich people can sell via the internet, and only rich people can start a YouTube channel…

      • Avid Amoeba
        link
        fedilink
        English
        4
        edit-2
        11 months ago

        Oh this looks decent. British non-profit, I like it. Registering.

      • @linearchaos@lemmy.world
        link
        fedilink
        English
        311 months ago

        Smells too much like Duolingo. Here, everyone jump in and answer all the questions. 5 years later: ooh, look at this gold mine of community data we own…

        • @residentmarchant@lemmy.world
          link
          fedilink
          English
          811 months ago

          This was actually the whole original point of Duolingo. The founder previously created reCAPTCHA to crowdsource the transcription of scanned books that OCR couldn’t read.

          His whole thing is crowdsourcing difficult tasks that machines struggle with by giving people some reason to do them (preventing spam at first, learning a language now).

          From what I understand Duolingo just got too popular and the subscription service they offer made them enough money to be happy with.

          • @linearchaos@lemmy.world
            link
            fedilink
            English
            111 months ago

            Duolingo has been systematically enshittifying the free/ad-supported service. Now every time you fart, you get a big unskippable ad trying to get you to subscribe to their service, free for 14 days, without telling you the price. They took all that crowdsourced data that they were never going to profit off of, and are making the app a miserable experience without a subscription.

    • Avid Amoeba
      link
      fedilink
      English
      311 months ago

      We already have the SO data. We could populate such a tool with it and start from there.