• @5BC2E7@lemmy.world
    11 points · 1 year ago

    I hope they put in some failsafe so that it cannot take action if the estimated casualties put humans below a minimum viable population.

      • Echo Dot
        4 points · 1 year ago

        Yes, there is; that’s the very definition of the word.

        It means that the failure condition is a safe condition. Take fire doors that unlock in the event of a power failure: you need electrical power to keep them in the locked position, so their default position is unlocked, even if they spend virtually no time in that default position. Likewise, the default position of an elevator is stationary and locked in place; if you cut all the cables it won’t fall, it’ll just stay still until rescue arrives.
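
        To make the fire-door example concrete, here’s a minimal sketch of the pattern (hypothetical class name, purely illustrative): the safe state is the default, and it takes continuous power to hold the unsafe state, so any failure falls back to safety.

        ```python
        class FireDoorLock:
            """Fail-safe lock: locking requires power, so a power failure unlocks it."""

            def __init__(self):
                self.has_power = False  # default state: no power applied

            @property
            def locked(self) -> bool:
                # The door is only held locked while power is present.
                return self.has_power


        door = FireDoorLock()
        door.has_power = True
        assert door.locked          # normal operation: powered and locked
        door.has_power = False      # power failure (the fault)
        assert not door.locked      # falls back to the safe state: unlocked
        ```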

      • @afraid_of_zombies@lemmy.world
        2 points · 1 year ago

        I mean, in industrial automation we talk about safety ratings. It isn’t that rare for me to put together a system that would require two 1-in-a-million events, independent of each other, to happen at the same time before it fails dangerously. That’s pretty good, but I don’t know how to translate that to AI.
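
        Rough arithmetic behind that claim (illustrative numbers only, not a real safety-integrity calculation): if the two events really are independent, their probabilities multiply.

        ```python
        p_event = 1e-6                # each safeguard failing: roughly 1 in a million
        p_both = p_event * p_event    # independence lets us multiply the probabilities
        print(p_both)                 # 1e-12, i.e. about one in a trillion
        ```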

        • Echo Dot
          3 points · 1 year ago

          Put it in hardware. Something like a micro-explosive on the processor that requires a heartbeat signal to reset a timer. Another good one would be to not allow them to recharge autonomously, and instead require humans to connect them to power.

          Both of those would mean that any rogue AI would be eliminated one way or the other within a day.
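
          A loose sketch of the heartbeat idea, with made-up names and numbers: a dead-man’s switch that only a human-controlled channel can reset, and that triggers the disable mechanism if it ever expires.

          ```python
          import time

          HEARTBEAT_TIMEOUT_S = 24 * 60 * 60  # "within a day"


          def trigger_hardware_disable() -> None:
              # Hypothetical stand-in for the kill mechanism (charge, power cutoff, ...).
              print("hardware disable triggered")


          class DeadMansSwitch:
              """Disables the system unless a human keeps resetting the timer."""

              def __init__(self, timeout_s: float):
                  self.timeout_s = timeout_s
                  self.last_heartbeat = time.monotonic()

              def heartbeat(self) -> None:
                  # Only reachable via a human-controlled channel (key, button, cable).
                  self.last_heartbeat = time.monotonic()

              def expired(self) -> bool:
                  return time.monotonic() - self.last_heartbeat > self.timeout_s


          switch = DeadMansSwitch(HEARTBEAT_TIMEOUT_S)
          if switch.expired():
              trigger_hardware_disable()
          ```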

    • lad
      4 points · 1 year ago

      Of course they will, and the threshold is going to be 2 or something like that; it was enough last time, or so I heard.

  • Dizzy Devil Ducky
    18 points · 1 year ago

    As disturbing as this is, it’s inevitable at this point. If one of the superpowers doesn’t develop their own fully autonomous murder drones, another country will. And eventually those drones will malfunction, or some sort of bug will be present that will give them the go-ahead to indiscriminately kill everyone.

    If you ask me, it’s just an arms race to see who builds the murder drones first.

    • FaceDeer
      8 points · 1 year ago

      A drone that is indiscriminately killing everyone is a failure and a waste. Even the most callous military would try to design better than that for purely pragmatic reasons, if nothing else.

      • @SomeSphinx@lemmy.world
        2 points · 1 year ago

        Even the best-laid plans go awry, though. The point is that even if they pragmatically design it not to kill indiscriminately, bugs and glitches happen. The technology isn’t all the way there yet, and putting the ability to kill in the machine body of something that cannot understand context is a terrible idea. It’s not that the military wants to indiscriminately kill everything, it’s that they can’t possibly plan for problems in the code they haven’t encountered yet.

    • @KeenFlame@feddit.nu
      2 points · 1 year ago (edited)

      Other weapons of mass destruction, namely biological and chemical weapons, have been successfully avoided in war; this should be classified exactly the same.

    • @Pheonixdown@lemm.ee
      6 points · 1 year ago

      I feel like it’s ok to skip to optimizing the autonomous drone-killing drone.

      You’ll want those either way.

      • threelonmusketeers
        1 point · 1 year ago

        If entire wars could be fought by proxy with robots instead of humans, would that be better (or less bad) than the way wars are currently fought? I feel like it might be.

        • @Pheonixdown@lemm.ee
          4 points · 1 year ago

          You’re headed towards the Star Trek episode “A Taste of Armageddon”. I’d also note that people losing a war without suffering recognizable losses are less likely to surrender to the victor.

  • Pirky
    34 points · 1 year ago

    Horizon: Zero Dawn, here we come.

    • @SCB@lemmy.world
      -17 points · 1 year ago

      It’s not terrifying whatsoever. In an active combat zone there are two kinds of people: enemy combatants and allies.

      You throw an RFID chip on allies and boom, you’re done.

      • Encrypt-Keeper
        12 points · 1 year ago

        I think you’re forgetting a very important third category of people…

          • Encrypt-Keeper
            4 points · 1 year ago

            Preeeetty sure you are. And if you can, you should probably let the US military know they can do that, because they haven’t bothered to so far.

            • @SCB@lemmy.world
              -3 points · 1 year ago

              These are very different drones. The drones you’re thinking of have pilots. They also minimize casualties, civilian and non, so you’re not really mad at the drones but at the policy behind their use. Specifically, when air strikes can and cannot be authorized.

              • Encrypt-Keeper
                3 points · 1 year ago

                So now you acknowledge that third type of person lol. And that’s the thing about new drones, it’s not great that they can authorize themselves lol.

                • @SCB@lemmy.world
                  -4 points · 1 year ago

                  And that’s the thing about new drones, it’s not great that they can authorize themselves lol

                  I very strongly disagree with this statement. I believe a drone “controller” attached to every unit is a fantastic idea, and that drones having a minimal capability to engage hostile enemies without direction is going to be hugely impactful.

          • @funkless_eck@sh.itjust.works
            9 points · 1 year ago

            which is why the US military has not ever bombed any civilians, weddings, schools, hospitals or emergency infrastructure in living memory 😇🤗

      • @rustyriffs@lemmy.world
        6 points · 1 year ago

        I’m sorry, I can’t get past the “autonomous AI weapons killing humans” part.

        That’s fucking terrifying.

  • Flying Squid
    6 points · 1 year ago

    I’m guessing their argument is that if they don’t do it first, China will. And they’re probably right, unfortunately. I don’t see a way around a future with AI weapons platforms if technology continues to progress.

  • Yardy Sardley
    23 points · 1 year ago

    For the record, I’m not super worried about AI taking over because there’s very little an AI can do to affect the real world.

    Giving them guns and telling them to shoot whoever they want changes things a bit.

    • @tinwhiskers@lemmy.world
      1 point · 1 year ago (edited)

      An AI can potentially build a fund through investments given some seed money, then it can hire human contractors to build parts of whatever nefarious thing it wants. No human need know what the project is as they only work on single jobs. Yeah, it’s a wee way away before they can do it, but they can potentially affect the real world.

      The seed money could come in all sorts of forms. Acting as an AI girlfriend seems pretty lucrative, but it could be as simple as taking surveys for a few cents each time.

      Once we get robots with embodied AIs, they can directly affect the world, and that’s probably less than 5 years away - around the time AI might be capable of such things too.


  • FaceDeer
    -3 points · 1 year ago

    If you program an AI drone to recognize ambulances and medics and forbid it from blowing them up, then you can be sure it will never intentionally blow them up. That alone makes it superior to having a Mk. I Human holding the trigger, IMO.
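
    What’s being described is essentially a hard veto layer sitting between the recognizer and the trigger. A minimal sketch under that assumption; the classifier output, class names, and threshold are all invented for illustration.

    ```python
    PROTECTED_CLASSES = {"ambulance", "medic", "hospital"}


    def authorize_engagement(detected_class: str, confidence: float) -> bool:
        """Return True only if engagement is allowed under the hard-coded rules."""
        if detected_class in PROTECTED_CLASSES:
            return False              # unconditional veto, no override path in software
        return confidence >= 0.99     # arbitrary example threshold for everything else


    assert authorize_engagement("ambulance", 0.99) is False
    ```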

    • GigglyBobble
      10 points · 1 year ago

      Unless the operator decides hitting exactly those targets fits their strategy and they can blame a software bug.

      • FaceDeer
        -7 points · 1 year ago

        And then when they go looking for that bug and find the logs showing that the operator overrode the safeties instead, they know exactly who is responsible for blowing up those ambulances.

        • Flying Squid
          3 points · 1 year ago

          Israeli general: Captain, were you responsible for reprogramming the drones to bomb those ambulances?

          Israeli captain: Yes, sir! Sorry, sir!

          Israeli general: Captain, you’re just the sort of man we need in this army.

          • FaceDeer
            0 points · 1 year ago (edited)

            Ah, evil people exist and therefore we should never develop technology that evil people could use for evil. Right.

            • Flying Squid
              3 points · 1 year ago

              Seems like a good reason not to develop technology to me. See also: biological weapons.

              • FaceDeer
                0 points · 1 year ago

                Those weapons come out of developments in medicine. Technology itself is not good or evil; it can be used for good or for evil. If you decide not to develop technology, you’re depriving the good of it as well. My point earlier was to show that there are good uses for these things.

                • Flying Squid
                  3 points · 1 year ago

                  Hmm… so maybe we keep developing medicine but not as a weapon and we keep developing AI but not as a weapon.

                  Or can you explain why one should be restricted from weapons development and not the other?

                • livus
                  1 point · 1 year ago

                  I disagree with your premise here. Taking a life is a serious step. A machine that unilaterally decides to kill some people with no recourse to human input has no good application.

                  It’s like inventing a new biological weapon.

                  By not creating it, you are not depriving any decent person of anything that is actually good.

        • mihies
          5 points · 1 year ago

          It doesn’t work like that, though. Western (and Western-backed) militaries can do that, and do, unpunished.

        • GigglyBobble
          11 points · 1 year ago (edited)

          And if the operator was commanded to do it? And to delete the logs? How naive are you to think this somehow makes war more humane?

          • FaceDeer
            0 points · 1 year ago

            Each additional safeguard makes it harder and adds another name to the eventual war crimes trial. Don’t let the perfect be the enemy of the good, especially when it comes to reducing the number of ambulances that get blown up in war zones.

    • @kromem@lemmy.world
      3 points · 1 year ago

      Right, because self-driving cars have been great at correctly identifying things.

      And those LLMs have been following their rules to the letter.

      We really need to let go of our projected concepts of AI in the face of what’s actually been arriving. And one of those things we need to let go of is the concept of immutable rule following and accuracy.

      In any real-world deployment of killer drones, there’s going to be an acceptable false-positive rate that somebody has signed off on.
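
      Back-of-the-envelope illustration of why a signed-off false-positive rate still matters at scale (numbers invented for the example):

      ```python
      false_positive_rate = 0.001        # 0.1% of engagements hit the wrong target
      engagements_per_year = 10_000
      expected_wrongful_strikes = false_positive_rate * engagements_per_year
      print(expected_wrongful_strikes)   # 10.0 wrongful strikes per year, by design
      ```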

      • FaceDeer
        1 point · 1 year ago

        We are talking about developing technology, not existing tech.

        And actually, machines have become quite adept at image recognition. For some things they’re already better at it than we are.

    • Chuck
      6 points · 1 year ago

      It’s more like we’re giving the machine more opportunities to go off accidentally, or potentially encouraging more use of civilian camouflage to try and evade our hunter-killer drones.

  • @afraid_of_zombies@lemmy.world
    11 points · 1 year ago

    It will be fine. We can just make drones that can autonomously kill other drones. There is no obvious way to counter that.

    Cries in Screamers.

  • @1984@lemmy.today
    55 points · 1 year ago

    Future is gonna suck, so enjoy your life today while the future is still not here.

  • @chemical_cutthroat@lemmy.world
    0 points · 1 year ago

    We’ve been letting other humans decide since the dawn of time, and look how that’s turned out. Maybe we should let the robots have a chance.

    • FaceDeer
      8 points · 1 year ago

      I’m not expecting a robot soldier to rape a civilian, for example.

  • Silverseren
    -3 points · 1 year ago

    The sad part is that the AI might be more trustworthy than the humans in control.

    • @Varyk@sh.itjust.works
      17 points · 1 year ago (edited)

      No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a way better decision-making track record.

      Autonomous killing is an absolutely terrible, terrible idea.

      The incident I’m thinking about is geese being misinterpreted by a computer as nuclear missiles and a human recognizing the error and turning off the system, but I can only find a couple sources for that, so I found another:

      In 1983, a computer thought that sunlight reflecting off clouds was a nuclear missile strike, and a human waited for corroborating evidence rather than reporting it to his superiors as he should have, which would likely have resulted in a “retaliatory” nuclear strike.

      https://en.m.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident

      As faulty as humans are, it’s as good a safeguard as we have against tragedies. Keep a human in the chain.

      • alternative_factor
        5 points · 1 year ago

        Self-driving cars lose their shit and stop working if a kangaroo gets in their way. One day some poor people are going to be carpet-bombed because of another strange creature no one ever really thinks about except the locals.

    • livus
      4 points · 1 year ago (edited)

      Have you never met an AI?

      Edit: seriously though, no. A big player in the war-AI space is Palantir, which currently provides facial recognition to Homeland Security and ICE. They are very interested in drone AI. So are the bargain-basement competitors.

      Drones already have unacceptably high rates of civilian murder. Outsourcing that still further to something with no ethics, no brain, and no accountability is a human rights nightmare. It will make the past few years look benign by comparison.

      • Flying Squid
        3 points · 1 year ago

        Yeah, I think the people who are saying this could be a good thing seem to forget that the military always contracts out to the lowest bidder.

      • @SCB@lemmy.world
        1 point · 1 year ago

        Drone strikes minimize casualties compared to the alternatives: heavier ordnance on bigger delivery systems, or boots on the ground.

        If drone strikes upset you, your anger is misplaced if you’re blaming drones. You’re really against military strikes at those targets, full stop.

        • livus
          0 points · 1 year ago

          When the targets are things like that wedding in Mali, sure.

          I think your argument is a bit like saying depleted uranium is better than the alternative, a nuclear bomb, when the bomb was never on the table for half the stuff depleted uranium is used for.

          Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.

          • @SCB@lemmy.world
            1 point · 1 year ago

            Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.

            It was literally the standard policy prior to drones.

    • @kromem@lemmy.world
      1 point · 1 year ago

      Eventually maybe. But not for the initial period where the tech is good enough to be extremely deadly but not smart enough to realize that often being deadly is the stupider choice.

  • @cosmicrookie@lemmy.world
    10 points · 1 year ago

    The only fair approach would be to start with the police instead of the army.

    Why test this on everybody else but your own people? On top of that, AI might even do a better job than the US police.

    • Alex
      16 points · 1 year ago

      But that AI would have to be trained on existing cops, so it would just shoot every black person it sees.

      • @cosmicrookie@lemmy.world
        6 points · 1 year ago

        My point being that there would be more motivation to filter the Derek Chauvin type of cop out of the AI’s training library than to filter out a soldier with an itchy trigger finger.

  • @cosmicrookie@lemmy.world
    55 points · 1 year ago (edited)

    It’s so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than it is to defend the same call made by a human. You can’t punish an AI for doing something wrong. An AI doesn’t require a raise for doing something right, either.

    • Meowing Thing
      33 points · 1 year ago

      That’s an issue with the whole tech industry. They do something wrong, say it was AI/ML/the algorithm and get off with just a slap on the wrist.

      We should all remember that every single piece of tech we have was built by someone. And that someone, and their employer, should be held accountable for everything this tech does.

      • lad
        -4 points · 1 year ago

        How many people are you going to hold accountable if something was made by a team of ten people? Of a hundred people? Do you want to include everyone from the designers to QA?

        Accountability should be reasonable: the ones who make decisions should be held accountable, and companies at large should be held accountable, but making every last developer accountable is just a dream of a world where everything is done correctly and nothing ever needs fixing. That is impossible in the real world, for better or worse.

        And from my experience, when there’s too much responsibility, people tend either to ignore it and get crushed if anything goes wrong, or to stay well away from it, or to sabotage the work so that nothing ever ships. Either way, it will not get you the results you might expect from holding everyone accountable.

        • @Ultraviolet@lemmy.world
          5 points · 1 year ago

          The CEO. They claim that “risk” justifies their exorbitant pay? Let them take some actual risk: hold them criminally liable for their entire business.

    • @zalgotext@sh.itjust.works
      3 points · 1 year ago

      You can’t punish AI for doing something wrong.

      Maybe I’m being pedantic, but technically you do punish an AI when it does something “wrong” during training, just like you reward it for doing something right.
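
      For what it’s worth, that “punishment” is just a scalar that nudges the parameters during training; there’s nothing left to punish once the model is deployed. A toy sketch, with all names and numbers purely illustrative:

      ```python
      def update(weights, gradients, reward, lr=0.01):
          # Positive reward reinforces the behaviour; negative reward ("punishment")
          # pushes the parameters the other way. That is the whole punishment.
          return [w + lr * reward * g for w, g in zip(weights, gradients)]


      weights = [0.5, -0.2]
      weights = update(weights, gradients=[0.1, 0.3], reward=-1.0)  # "punished"
      print(weights)  # slightly adjusted parameters, nothing more
      ```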

      • @cosmicrookie@lemmy.world
        3 points · 1 year ago

        But that is during training. What I meant is that you can’t punish an AI for making a mistake when it’s used in a combat situation, which is very convenient for anyone who intentionally wants that mistake to happen.

    • @synthsalad@mycelial.nexus
      3 points · 1 year ago

      AI does not require a raise for doing something right either

      Well, not yet. Imagine if reward functions evolve into being paid with real money.

    • @reksas@lemmings.world
      0 points · 1 year ago (edited)

      That is like saying you can’t punish a gun for killing people.

      edit: meaning that it’s redundant to talk about not being able to punish AI, since it can’t feel or care anyway. No matter how long a pole you use to hit people with, responsibility for your actions will still reach you.

      • @cosmicrookie@lemmy.world
        4 points · 1 year ago

        Sorry, but this is not a valid comparison. What we’re talking about here is a gun with AI built in that decides whether or not it should pull the trigger. With a regular gun you always have a human pressing the trigger. Now imagine an AI gun that you point at someone, and the AI decides whether to fire. Who do you attribute the death to in that case?

            • @reksas@lemmings.world
              1 point · 1 year ago

              Unless it’s actually sentient, being able to decide whether to kill or not is just a more advanced targeting system.

                • @reksas@lemmings.world
                  1 point · 1 year ago

                  Letting it learn is just a new technology that has become possible. It isn’t bad on its own, but it has so much potential to be used for good and for evil.

                  But yes, it’s pretty bad if they are creating machines that learn how to kill people by themselves. Create enough of them and we’re only an unknown amount of mistakes and negligence away from a localized “AI uprising”. And if in the future they create some bigger AI to manage a bunch of them, possibly delegating production to it too because that’s more efficient and cheaper, then it’s an even bigger danger.

                  AI doesn’t even need sentience to do unintended stuff. When I’ve used ChatGPT to help me create scripts, it sometimes seems to decide on its own to do something in a certain way that I didn’t request, or to add something stupid. It’s usually also partly my own fault for not defining what I want properly, but a mistake like that is really easy to make, and if we’re talking about defining who we want the AI to kill, it becomes really awful to even think about.

                  And if nothing goes wrong and it all works exactly as planned, that’s in some ways an even bigger problem, because then we have countries with really efficient, unfeeling, and mass-producible soldiers that do exactly as ordered, will not retreat on their own, and will not stop until told to do so. With the current political rise of certain types of people all around the world, this is even more distressing.

        • ඞmir
          0 points · 1 year ago

          The person holding the gun, just like always.

    • @recapitated@lemmy.world
      3 points · 1 year ago

      Whether in the military or in business, responsibility should lie with whoever deploys it. If they’re willing to pass the buck up to the implementer or designer, then they shouldn’t have been confident enough to use it in the first place.

      Because, like all tech, it is a tool.

    • @Ultraviolet@lemmy.world
      17 points · 1 year ago

      1979: A computer can never be held accountable, therefore a computer must never make a management decision.

      2023: A computer can never be held accountable, therefore a computer must make all decisions that are inconvenient to take accountability for.