  • @pjwestin@lemmy.world · 0 points · 7 months ago

    I really don’t understand why they’re simultaneously arguing that they need access to copyrighted works in order to train their AI while also dropping their non-profit status. If they were at least ostensibly a non-profit, they could pretend that their work was for the betterment of humanity or whatever, but now they’re basically saying, “exempt us from this law so we can maximize our earnings.” …and, honestly, our corrupt legislators wouldn’t have a problem with that were it not for the fact that bigger corporations with more lobbying power will fight against it.

  • @Kyrgizion@lemmy.world · 22 points · 7 months ago

    Canceled my sub as a means of protest. I used it for research and testing purposes, and $20 wasn’t that big of a deal. But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies. Voting with our wallets may be the very last vestige of freedom we have left, since money equals speech.

    I hope he gets raped by an irate Roomba with a broomstick.

  • @kippinitreal@lemmy.world · 54 points · edited · 7 months ago

    Putting my tin foil hat on… Sam Altman knows the AI train might be slowing down soon.

    The OpenAI brand is the most valuable part of the company right now, since models from Google, Anthropic, etc. can match or beat ChatGPT, but they aren’t taking off coz they aren’t as cool as OpenAI.

    The business model of training & running these models is not sustainable. If there is any money to be made, it is NOW, while the speculation is highest. The nonprofit is just getting in the way.

    This could be wishful thinking coz fuck corporate AI, but no one can deny AI is in a speculative bubble.

    • @dan@upvote.au · 0 points · edited · 7 months ago

      It’s amusing. Meta’s AI team is more open than "Open"AI ever was - they publish so many research papers for free, and the latest versions of Llama are very capable models that you can run on your own hardware (if it’s powerful enough) for free as long as you don’t use it in an app with more than 700 million monthly users.
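
      A minimal sketch of what “run it on your own hardware” can look like, using the Hugging Face transformers library (the checkpoint name, dtype, and prompt below are just examples; the Llama repos are gated, so you have to accept Meta’s license on Hugging Face first, and the 8B model wants roughly 16 GB of VRAM in bfloat16):

      ```python
      # Hypothetical local-inference sketch; the model id is an example, not a recommendation.
      import torch
      from transformers import pipeline

      generator = pipeline(
          "text-generation",
          model="meta-llama/Llama-3.1-8B-Instruct",  # gated repo: accept the license first
          torch_dtype=torch.bfloat16,
          device_map="auto",  # use available GPUs, fall back to CPU otherwise
      )

      out = generator("Explain the Llama community license in one sentence.", max_new_tokens=64)
      print(out[0]["generated_text"])
      ```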

  • barnaclebutt · 57 points · 7 months ago

    I’m sure they were dead weight. I trust OpenAI completely and all tech gurus named Sam. Btw, what happened to that crypto guy? He seemed so nice.

  • @N0body@lemmy.dbzer0.com · 145 points · 7 months ago

    There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in a way that actually benefits real use cases.

    Cancer screenings reviewed and approved by a doctor could be accurate enough to save so many lives and prevent so much suffering through early detection.

    Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.

    • @patatahooligan@lemmy.world · 1 point · 7 months ago

      No, there isn’t really any such alternate timeline. Good honest causes are not profitable enough to survive against the startup scams. Even if the non-profit side won internally, OpenAI would just be left behind, funding would go to its competitors, and OpenAI would shut down. Unless you mean a radically different alternate timeline where our economic system is fundamentally different.

      • @mustbe3to20signs@feddit.org · 0 points · 7 months ago

        AI models can outmatch most oncologists and radiologists at recognizing early tumor stages in MRI and CT scans.
        Further developing this strength could lead to earlier diagnosis with less invasive methods, not only saving countless lives and prolonging the individual’s remaining quality of life, but also saving a shit ton of money.

        • @msage@programming.dev · 0 points · 7 months ago

          Wasn’t it shown that an AI was getting amazing results because it noticed the cancer screens had doctors’ signatures at the bottom? Or did they make another run with the signatures hidden?

          • @mustbe3to20signs@feddit.org · 0 points · edited · 7 months ago

            More than one system has been shown to “cheat” because of biased training material. One model told ducks and chickens apart by their backgrounds, because it was trained on pictures of ducks in water and chickens on sandy ground, if I remember correctly.
            Since multiple medical image recognition systems are in development, I can’t imagine they were all trained this badly on unsuitable material.

            • @msage@programming.dev · 1 point · 7 months ago

              They are not ‘faulty’; they have been fed the wrong training data.

              This is the most important aspect of any AI - it’s only as good as the training dataset is. If you don’t know the dataset, you know nothing about the AI.

              That’s why every claim of ‘super efficient AI’ needs to be investigated more deeply. But that goes against the line-goes-up principle, so don’t expect that to happen a lot.
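
              A toy sketch of that failure mode (all synthetic, made-up data): the model latches onto a spurious cue, like the background or a signature, looks great on a test set that shares the bias, and falls apart as soon as the bias is removed:

              ```python
              # Synthetic "duck on water vs. chicken on sand" example.
              import numpy as np
              from sklearn.linear_model import LogisticRegression

              rng = np.random.default_rng(0)

              def make_split(n, background_matches_label):
                  y = rng.integers(0, 2, n)                   # 0 = chicken, 1 = duck
                  animal = y + rng.normal(0, 2.0, n)          # weak genuine signal
                  if background_matches_label:
                      background = y + rng.normal(0, 0.1, n)  # water vs. sand, aligned with the label
                  else:
                      background = rng.integers(0, 2, n) + rng.normal(0, 0.1, n)  # backgrounds shuffled
                  return np.column_stack([animal, background]), y

              X_train, y_train = make_split(5000, background_matches_label=True)
              X_biased, y_biased = make_split(1000, background_matches_label=True)
              X_fair, y_fair = make_split(1000, background_matches_label=False)

              clf = LogisticRegression().fit(X_train, y_train)
              print("accuracy on a test set with the same bias:", clf.score(X_biased, y_biased))  # looks impressive
              print("accuracy once backgrounds are shuffled:", clf.score(X_fair, y_fair))         # drops toward chance
              ```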

  • @halcyoncmdr@lemmy.world · 179 points · 7 months ago

    You know guys, I’m starting to think what we heard about Altman when he was removed a while ago might actually have been real.

    /s

    • nickwitha_k (he/him) · 20 points · 7 months ago

      Water-cooling systems are typically closed-loop for home computers. Datacenters are a different beast, and a fair number of open-loop systems seem to be in place.

      • @boonhet@lemm.ee · 0 points · 7 months ago

        But even then, is the water truly consumed? Does it get contaminated with something like the cooling water of a nuclear power plant? Or does the water just get warm and then either get pumped into a body of water somewhere or, ideally, get reused to heat homes?

        There are loads of problems with the energy consumption of AI, but I don’t think the water consumption is such a huge problem? Hopefully, anyway.
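
        For a sense of scale, a rough back-of-envelope for the evaporative (open-loop) case, with made-up numbers (a water usage effectiveness of around 1–2 L per kWh is a commonly cited ballpark; closed-loop systems mostly reuse the same water and only reject heat):

        ```python
        # Hypothetical figures, purely for order of magnitude.
        facility_power_mw = 30        # assumed total datacenter load
        wue_liters_per_kwh = 1.8      # assumed water usage effectiveness for evaporative cooling

        kwh_per_day = facility_power_mw * 1000 * 24
        liters_per_day = kwh_per_day * wue_liters_per_kwh
        print(f"~{liters_per_day / 1000:,.0f} m^3 of water evaporated per day")  # about 1,300 m^3/day here
        ```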

        • @JamesFire@lemmy.world · 5 points · 7 months ago

          > Does it get contaminated with something like the cooling water of a nuclear power plant?

          This doesn’t happen unless the reactor was sabotaged. Cooling water that interacts with the core is always a closed-loop system, for exactly this reason.

  • @NeoNachtwaechter@lemmy.world · 70 points · edited · 7 months ago

    Altman downplayed the major shakeup.

    "Leadership changes are a natural part of companies

    Is he just trying to tell us he is next?

    /s