Both the concept of and plans for military robots have been around for a while: UAVs are currently invaluable for long-range bombardment and reconnaissance missions, and robots have proven equally invaluable in hazardous jobs such as bomb disposal.
But only recently has artificial intelligence been weaponised in this way. SAPIENT is a project by the British Ministry of Defence to use A.I. for real-time threat identification, monitoring camera feeds to “reduce the risk of human error.” In practice this means tracking the activity of people, especially around sensitive sites such as checkpoints, bases and embassies, and deciding when to flag a potential threat, or even the mere risk of one.
This technology is also going to be available to Britain’s closest allies, the rest of the Five Eyes: the USA, Australia, New Zealand and Canada. The implications of this are immense. Many technologies began as military research before civilian applications were found, including nuclear technology, microwaves and even the internet. Its predecessor, ARPANET, started as a way to centralise and upgrade military command and control using a network of computers and the then novel idea of packet switching, before people realised that instantaneous communication between computers had applications beyond the management of war-making. Now A.I. has been added to this prestigious list.
Ethical limitations aside, this will have a dramatic effect on A.I. development, as the military is one of the biggest research funders in the world, with access to massive amounts of government money in the name of the national interest. The U.K. spent GBP 1.7 billion on military research, and thanks to technology sharing, the Americans may be able to draw on their own defence budget to aid development or build A.I. of their own.
Also important is the precedent this sets. The government has now, indirectly, placed computers in charge of making ethical decisions, many of which will inevitably be very grey and will almost certainly have lethal consequences.
Does A.I. have a place in war? Should it?
by Joey, Concord College