I hope they put in some failsafe so that it cannot take action if the estimated casualties would put humans below a minimum viable population.
There is no such thing as a failsafe that can’t fail itself
Yes there is, that's the very definition of the word.
It means that the failure condition is a safe condition. Like fire doors that unlock in the event of a power failure: you need electrical power to keep them in the locked position, so their default position is unlocked, even if they spend virtually no time in that default position. The default position of an elevator is stationary and locked in place; if you cut all the cables it won't fall, it'll just stay still until rescue arrives.
I mean, in industrial automation we talk about safety ratings. It isn't that rare for me to put together a system that would require two 1-in-a-million events, independent of each other, to happen at the same time for it to fail dangerously. That's pretty good, but I don't know how to translate that to AI.
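For readers unfamiliar with how those industrial safety figures combine, a minimal sketch (the numbers are purely illustrative): independent failure probabilities multiply, so two 1-in-a-million layers give roughly a 1-in-a-trillion combined chance, provided the failures really are independent.

```python
# Rough sketch: probability that two independent safety layers both fail
# on the same demand (numbers are illustrative, not from any real system).
p_layer_1 = 1e-6   # chance layer 1 fails on a given demand
p_layer_2 = 1e-6   # chance layer 2 fails on the same demand

# Multiplying is only valid if the failures are truly independent.
p_both_fail = p_layer_1 * p_layer_2
print(f"Combined probability of a dangerous failure: {p_both_fail:.0e}")  # 1e-12
```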
Put it in hardware. Something like a micro explosive on the processor that requires a heartbeat signal to reset a timer. Another good one would be to not allow them to autonomously recharge, and to require humans to connect them to power.
Both of those would mean that any rogue AI would be eliminated one way or the other within a day
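A minimal sketch of the heartbeat idea described above, written in Python purely for illustration (a real version would live in dedicated hardware precisely so the software can't patch it out); the class name, timeout, and kill-switch hook are all made up:

```python
import time

# Dead man's switch sketch: a kill switch fires unless a human-issued
# heartbeat resets the timer in time. Fail-safe, because the default
# outcome on silence is shutdown, not continued operation.
HEARTBEAT_TIMEOUT_S = 60.0

class DeadMansSwitch:
    def __init__(self, trigger_kill_switch):
        self._last_heartbeat = time.monotonic()
        self._trigger_kill_switch = trigger_kill_switch  # e.g. cuts power or fires the charge

    def heartbeat(self):
        """Reset the timer; called only on an authenticated signal from a human."""
        self._last_heartbeat = time.monotonic()

    def check(self):
        """Polled continuously by the supervising circuit."""
        if time.monotonic() - self._last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self._trigger_kill_switch()  # default outcome on silence: shut down
```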
Of course they will, and the threshold is going to be 2 or something like that. It was enough last time, or so I heard.
Whoops. Two guys left. Nah, that's enough to repopulate Earth.
"Well, what do you say, Aron, wanna try to repopulate?" "Sure, James, let's give it a shot."
As disturbing as this is, it's inevitable at this point. If one of the superpowers doesn't develop their own fully autonomous murder drones, another country will. And eventually those drones will malfunction, or some bug will be present that gives them the go-ahead to indiscriminately kill everyone.
If you ask me, it's just an arms race to see who builds the murder drones first.
A drone that is indiscriminately killing everyone is a failure and a waste. Even the most callous military would try to design better than that for purely pragmatic reasons, if nothing else.
Even the best laid plans go awry though. The point is even if they pragmatically design it to not kill indiscriminately, bugs and glitches happen. The technology isn’t all the way there yet and putting the ability to kill in the machine body of something that cannot understand context is a terrible idea. It’s not that the military wants to indiscriminately kill everything, it’s that they can’t possibly plan for problems in the code they haven’t encountered yet.
Other weapons of mass destruction, like biological and chemical weapons, have been successfully avoided in war; this should be classified exactly the same.
I feel like it’s ok to skip to optimizing the autonomous drone-killing drone.
You’ll want those either way.
If entire wars could be fought by proxy with robots instead of humans, would that be better (or less bad) than the way wars are currently fought? I feel like it might be.
You're headed towards the Star Trek episode "A Taste of Armageddon". I'd also note that people losing a war without suffering recognizable losses are less likely to surrender to the victor.
Horizon: Zero Dawn, here we come.
Can we all agree to protest self replication?
It won’t be nearly as interesting or fun (as Horizon) I don’t think.
Hey, I like that game! Oh, wait… 🤔
Won’t that be fun!
/s
Well that’s a terrifying thought. You guys bunkered up?
It’s not terrifying whatsoever. In an active combat zone there are two kinds of people - enemy combatants and allies.
You throw an RFID chip on allies and boom, you're done.
I think you’re forgetting a very important third category of people…
I am not. Turns out you can pick and choose where and when to use drones.
Preeeetty sure you are. And if you can, you should probably let the US military know they can do that, because they haven’t bothered to so far.
These are very different drones. The drones you're thinking of have pilots. They also minimize casualties - civilian and non - so you're not really mad at the drones, but at the policy behind their use. Specifically, when air strikes can and cannot be authorized.
So now you acknowledge that third type of person lol. And that’s the thing about new drones, it’s not great that they can authorize themselves lol.
And that’s the thing about new drones, it’s not great that they can authorize themselves lol
I very strongly disagree with this statement. I believe a drone “controller” attached to every unit is a fantastic idea, and that drones having a minimal capability to engage hostile enemies without direction is going to be hugely impactful.
They know. It is not important to them.
which is why the US military has not ever bombed any civilians, weddings, schools, hospitals or emergency infrastructure in living memory 😇🤗
They chose to do that. You’re against that policy, not drones themselves.
And that's how you guarantee conflict for generations to come!
Civilians? Never heard of 'em!
The vast majority of war zones have 0 civilians.
Perhaps your mind is too caught up in the Iraq/Afghanistan occupations.
Really? Like, where are you thinking of?
The entire Ukrainian front.
I'm sorry, I can't get past the "autonomous AI weapons killing humans" part.
That’s fucking terrifying.
I’m sorry but I just don’t see why a drone is scarier than a missile strike.
Inshallah
I’m guessing their argument is that if they don’t do it first, China will. And they’re probably right, unfortunately. I don’t see a way around a future with AI weapons platforms if technology continues to progress.
We could at least make it a war crime.
That doesn’t seem to have stopped anyone.
Basically it's just war with additional taxes and marketing needed.
For the record, I’m not super worried about AI taking over because there’s very little an AI can do to affect the real world.
Giving them guns and telling them to shoot whoever they want changes things a bit.
An AI can potentially build a fund through investments given some seed money, then it can hire human contractors to build parts of whatever nefarious thing it wants. No human need know what the project is as they only work on single jobs. Yeah, it’s a wee way away before they can do it, but they can potentially affect the real world.
The seed money could come in all sorts of forms. Acting as an AI girlfriend seems pretty lucrative, but it could be as simple as taking surveys for a few cents each time.
Once we get robots with embodied AIs, they can directly affect the world, and that’s probably less than 5 years away - around the time AI might be capable of such things too.
AI girlfriends are pretty lucrative. That sort of thing is an option too.
If you program an AI drone to recognize ambulances and medics and forbid them from blowing them up, then you can be sure that they will never intentionally blow them up. That alone makes them superior to having a Mk. I Human holding the trigger, IMO.
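As a rough illustration of what such a hard-coded veto could look like (the class list, threshold, and function are all hypothetical, not any real system's API):

```python
# Minimal sketch of a hard-coded veto: the protected-class check runs before,
# and independently of, whatever the targeting model wants to do.
PROTECTED_CLASSES = {"ambulance", "medic", "hospital"}

def may_engage(detections):
    """detections: list of (label, confidence) pairs from the recognition model."""
    for label, confidence in detections:
        # Err on the side of not firing: even a low-confidence protected
        # detection vetoes engagement outright.
        if label in PROTECTED_CLASSES and confidence > 0.1:
            return False
    return True  # veto passed; further authorization logic still applies
```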
Did you know that “if” is the middle word of life
Unless the operator decides hitting exactly those targets fits their strategy and they can blame a software bug.
And then when they go looking for that bug and find the logs showing that the operator overrode the safeties instead, they know exactly who is responsible for blowing up those ambulances.
Israeli general: Captain, were you responsible for reprogramming the drones to bomb those ambulances?
Israeli captain: Yes, sir! Sorry, sir!
Israeli general: Captain, you’re just the sort of man we need in this army.
Ah, evil people exist and therefore we should never develop technology that evil people could use for evil. Right.
Seems like a good reason not to develop technology to me. See also: biological weapons.
Those weapons come out of developments in medicine. Technology itself is not good or evil, it can be used for good or for evil. If you decide not to develop technology you’re depriving the good of it as well. My point earlier is to show that there are good uses for these things.
Hmm… so maybe we keep developing medicine but not as a weapon and we keep developing AI but not as a weapon.
Or can you explain why one should be restricted from weapons development and not the other?
I disagree with your premise here. Taking a life is a serious step. A machine that unilaterally decides to kill some people with no recourse to human input has no good application.
It’s like inventing a new biological weapon.
By not creating it, you are not depriving any decent person of anything that is actually good.
It doesn't work like that though. Western (and Western-backed) militaries can do that, and do, unpunished.
Here is a sample of US drones killing civilians
And if the operator was commanded to do it? And to delete the logs? How naive are you to think this somehow makes war more humane?
Each additional safeguard makes it harder and adds another name to the eventual war crimes trial. Don’t let the perfect be the enemy of the good, especially when it comes to reducing the number of ambulances that get blown up in war zones.
Right, because self-driving cars have been great at correctly identifying things.
And those LLMs have been following their rules to the letter.
We really need to let go of our projected concepts of AI in the face of what’s actually been arriving. And one of those things we need to let go of is the concept of immutable rule following and accuracy.
In any real world deployment of killer drones, there’s going to be an acceptable false positive rate that’s been signed off on.
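A back-of-the-envelope sketch of what a signed-off false positive rate means at scale; every number here is invented for illustration:

```python
# If some error rate has been accepted on paper, the expected number of
# wrongful strikes scales directly with how often the system is used.
false_positive_rate = 0.001      # 0.1% of engagements hit the wrong target (invented)
engagements_per_year = 50_000    # invented usage figure

expected_wrongful_strikes = false_positive_rate * engagements_per_year
print(expected_wrongful_strikes)  # 50.0 wrongful strikes per year at these rates
```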
We are talking about developing technology, not existing tech.
And actually, machines have become quite adept at image recognition. For some things they’re already better at it than we are.
It’s more like we’re giving the machine more opportunities to go off accidentally or potentially encouraging more use of civilian camouflage to try and evade our hunter killer drones.
It will be fine. We can just make drones that can autonomously kill other drones. There is no obvious way to counter that.
Cries in Screamers.
Future is gonna suck, so enjoy your life today while the future is still not here.
Thank god today doesn’t suck at all
Right? :)
The future might seem far off, but it starts right now.
At least it will probably be a quick and efficient death of all humanity when a bug hits the system and AI decides to wipe us out.
We’ve been letting other humans decide since the dawn of time, and look how that’s turned out. Maybe we should let the robots have a chance.
I’m not expecting a robot soldier to rape a civilian, for example.
How about no
Yeah, only humans can indiscriminately kill people!
The sad part is that the AI might be more trustworthy than the humans in control.
No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a way better decision-making track record.
Autonomous killing is an absolutely terrible, terrible idea.
The incident I’m thinking about is geese being misinterpreted by a computer as nuclear missiles and a human recognizing the error and turning off the system, but I can only find a couple sources for that, so I found another:
In 1983, a computer thought that sunlight reflecting off clouds was a nuclear missile strike, and a human waited for corroborating evidence rather than reporting it to his superiors as protocol required; reporting it would likely have resulted in a "retaliatory" nuclear strike.
https://en.m.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
As faulty as humans are, it's as good a safeguard against tragedies as we have. Keep a human in the chain.
Self-driving cars lose their shit and stop working if a kangaroo gets in their way; one day some poor people are going to be carpet-bombed because of another strange creature no one ever really thinks about except the locals.
Have you never met an AI?
Edit: seriously though, no. A big player in the war AI space is Palantir which currently provides facial recognition to Homeland Security and ICE. They are very interested in drone AI. So are the bargain basement competitors.
Drones already have unacceptably high rates of civilian murder. Outsourcing that still further to something with no ethics, no brain, and no accountability is a human rights nightmare. It will make the past few years look benign by comparison.
Yeah, I think the people who are saying this could be a good thing seem to forget that the military always contracts out to the lowest bidder.
Drone strikes minimize casualties compared to the alternatives - heavier ordnance on bigger delivery systems, or boots on the ground.
If drone strikes upset you, your anger is misplaced if you’re blaming drones. You’re really against military strikes at those targets, full stop.
When the targets are things like that wedding in Mali, sure.
I think your argument is a bit like saying depleted uranium is better than the alternative, a nuclear bomb, when the bomb was never on the table for half the stuff depleted uranium is used for.
Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.
Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.
It was literally the standard policy prior to drones.
Eventually maybe. But not for the initial period where the tech is good enough to be extremely deadly but not smart enough to realize that often being deadly is the stupider choice.
The only fair approach would be to start with the police instead of the army.
Why test this on everybody except your own people? On top of that, AI might even do a better job than the US police.
But that AI would have to be trained on existing cops, so it would just shoot every black person it sees
My point being that there would be more motivation to filter Derek Chauvin types of cops out of the AI's training data than there is to filter out a trigger-happy soldier.
It's so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than it is if it were a human choice. You can't punish AI for doing something wrong. AI does not require a raise for doing something right either.
That’s an issue with the whole tech industry. They do something wrong, say it was AI/ML/the algorithm and get off with just a slap on the wrist.
We should all remember that every single tech we have was built by someone. And this someone and their employer should be held accountable for all this tech does.
How many people are you going to hold accountable if something was made by a team of ten people? Or a hundred people? Do you want to include everyone from the designers to QA?
Accountability should be reasonable: the ones who make decisions should be held accountable, and companies at large should be held accountable, but making every last developer accountable is just a dream of a world where everything is done correctly and nothing ever needs fixing. That's impossible in the real world, for better or worse.
And from my experience, when there's too much responsibility, people tend to either ignore it and get crushed if anything goes wrong, or refuse to go near it, or sabotage the work so nothing ever ships. Either way, it will not get the results you might expect from holding everyone accountable.
The CEO. They claim that “risk” justifies their exorbitant pay? Let them take some actual risk, hold them criminally liable for their entire business.
You can’t punish AI for doing something wrong.
Maybe I’m being pedantic, but technically, you do punish AIs when they do something “wrong”, during training. Just like you reward it for doing something right.
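For anyone curious what that reward and punishment looks like mechanically, a toy sketch (all names and numbers invented; real systems use far more elaborate reinforcement learning setups): a scalar reward nudges action preferences up or down, so "punished" actions become less likely next time.

```python
import random

# Toy preference-learning loop: positive reward reinforces an action,
# negative reward ("punishment") suppresses it.
preferences = {"hold_fire": 0.0, "engage": 0.0}
LEARNING_RATE = 0.1

def choose_action():
    if random.random() < 0.1:                      # small chance to explore
        return random.choice(list(preferences))
    return max(preferences, key=preferences.get)   # otherwise pick the preferred action

def train_step(reward_for):
    action = choose_action()
    reward = reward_for(action)                    # +1 = reward, -1 = punishment
    preferences[action] += LEARNING_RATE * reward  # punished actions become less likely
    return action, reward
```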
But that is during training. What I meant is that you can't punish an AI for making a mistake when it's used in combat situations, which is very convenient for the ones who intentionally want that mistake to happen.
AI does not require a raise for doing something right either
Well, not yet. Imagine if reward functions evolve into being paid with real money.
That is like saying you can't punish a gun for killing people.
edit: meaning that it's redundant to talk about not being able to punish AI, since it can't feel or care anyway. No matter how long a pole you use to hit people with, the responsibility for your actions will still reach you.
Sorry, but this is not a valid comparison. What we're talking about here is having a gun with AI built in that decides if it should pull the trigger or not. With a regular gun you always have a human pressing the trigger. Now imagine an AI gun that you point at someone and the AI decides whether it should fire or not. Who do you attribute the death to in this case?
The one who deployed the AI to be there deciding whether to kill or not.
I don’t think that is what “autonomously decide to kill” means.
Unless it's actually sentient, being able to decide whether to kill or not is just a more advanced targeting system.
It’s the difference between programming it to do something and letting it learn though.
Letting it learn is just new technology that is possible. Not bad on its own but it has so much potential to be used for good and evil.
But yes, it's pretty bad if they are creating machines that learn how to kill people by themselves. Create enough of them and it's only some unknown amount of mistakes and negligence away from becoming a localized "AI uprising". And if in the future they create some bigger AI to manage a bunch of them, and possibly delegate production to it too because it's more efficient and cheaper that way, then it's an even bigger danger.
AI doesn't even need sentience to do unintended stuff. When I've used ChatGPT to help me create scripts, it sometimes seems to decide on its own to do something in a certain way I didn't request, or to add something stupid. It's usually also kind of my own fault for not defining what I want properly, but a mistake like that is really easy to make, and if we're talking about defining who we want the AI to kill, it becomes awful to even think about.
And if nothing goes wrong and it all works exactly as planned, it's kind of an even bigger problem, because then we have countries with really efficient, unfeeling, mass-producible soldiers that do 100% as ordered, will not retreat on their own, and will not stop until told to do so. With the current political rise of certain types of people all around the world, this is even more distressing.
The person holding the gun, just like always.
Whether in the military or in business, responsibility should lie with whoever deploys it. If they're willing to pass the buck up to the implementer or designer, then they weren't convinced enough of it to be using it in the first place.
Because, like all tech, it is a tool.
1979: A computer can never be held accountable, therefore a computer must never make a management decision.
2023: A computer can never be held accountable, therefore a computer must make all decisions that are inconvenient to take accountability for.