AI – The Dawn of Skynet?

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.” – Sybil Sage (New York Times)

 

The relentless march of technology and rapid advances in computer processing power have prompted industry experts to claim that the world is entering a new era of Artificial Intelligence (AI).  But is this really a new and pioneering frontier for technology, or simply a marketing opportunity to sell ever more powerful computers?  Will AI have profound implications for defence, and if so, how?

AI is not new.  Alan Turing published his paper “Computing Machinery and Intelligence” in 1950, proposing a test of machine intelligence he called the Imitation Game; the term “artificial intelligence” itself was coined six years later by John McCarthy for the 1956 Dartmouth workshop.

However, concerns grew over the potential for machines eventually to out-think humans. The Terminator films of the 1980s, starring Arnold Schwarzenegger, explored the potential for computers to become self-aware: the Skynet AI developed by Cyberdyne Systems turns on its human creators on “Judgment Day”.

Although some experts predict that computers might eventually become self-aware, in the near to medium term the main benefit of this pioneering technology is the ability to process huge amounts of data quickly and reliably.  Computers do not need sleep or regular rest, they can process vast amounts of data without being subject to human error, and they can provide an objective assessment to inform crucial decisions.

AI offers the potential to automate tasks, streamline processes, and maximise efficiency.  It can also augment human capabilities by providing insights and predictions, and by improving the accuracy of target recognition and identification in combat environments, particularly with autonomous systems.

But AI algorithms can reflect the biases of the data they are trained on, and as systems become more autonomous, questions of accountability arise, as does compliance with ethical norms and international law.  Where does accountability for errors lie?

This all sounds very academic, so how does it relate to specific examples?  Where can AI add real value, and where does society (and specifically defence) need to tread carefully?

During my military career, I served three tours in Afghanistan, where one of the most dangerous threats facing our soldiers was the improvised explosive device (IED).  The Taliban would identify a road or track where Western forces patrolled, then under the cover of darkness dig a hole in the road, bury a couple of Soviet-era anti-tank mines, and cover them over.  To the naked eye it was almost impossible to see where such threats were located.

However, despite the best efforts of the Taliban, the profile of the road where the mine had been laid had changed.  Not noticeably – at least to the human eye – but measurably.  By conducting detailed scans of the road network and then using AI to compare “before and after” road profiles, it was possible to detect changes, and thus potential locations for IEDs.
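At its core this is a change-detection problem.  The sketch below is a minimal illustration of the idea, assuming the “before” and “after” scans have already been co-registered as aligned one-dimensional surface-height profiles; the function and parameter names are illustrative, not drawn from any fielded system.

```python
import numpy as np

def flag_disturbances(before, after, window=25, threshold=3.0):
    """Flag stretches of road whose surface profile changed between scans.

    before, after: aligned 1-D arrays of surface-height samples taken
    along the same stretch of road.  window is the smoothing span in
    samples; threshold is in robust standard deviations.
    """
    diff = np.asarray(after, dtype=float) - np.asarray(before, dtype=float)

    # Smooth out per-sample sensor noise with a simple moving average.
    kernel = np.ones(window) / window
    smoothed = np.convolve(diff, kernel, mode="same")

    # Robust noise estimate (median absolute deviation), so one large
    # disturbance does not inflate the noise baseline itself.
    mad = np.median(np.abs(smoothed - np.median(smoothed)))
    sigma = 1.4826 * mad if mad > 0 else np.std(smoothed)

    # Indices where the surface has moved more than the expected noise.
    return np.flatnonzero(np.abs(smoothed) > threshold * sigma)

# Hypothetical example: a buried charge leaves a slight hump in the road.
rng = np.random.default_rng(0)
before = rng.normal(0.0, 0.01, 2000)           # baseline scan, ~1cm noise
after = before + rng.normal(0.0, 0.01, 2000)   # repeat scan of same road
after[1200:1240] += 0.05                       # ~5cm disturbance
print(flag_disturbances(before, after))        # indices near 1200-1240
```

A fielded system would also have to align the scans precisely, compensate for weather and traffic wear, and fuse many passes, but the principle is the same: flag wherever the measured surface departs from its earlier self by more than the expected noise.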

Similarly, the earlier that breast cancer is detected, the better the prognosis for a full recovery.  Human analysis of breast scans is vulnerable to fatigue and error, but AI can examine every scan pixel by pixel, at a level of detail no radiologist could sustain, to enhance detection rates.
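For the curious, the kind of model involved is a convolutional image classifier.  The toy sketch below, written with the open-source PyTorch library and assuming small hypothetical 64x64 greyscale scan patches, shows the shape of such a system; it is illustrative only, nothing like a validated clinical tool.

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Toy convolutional classifier for greyscale scan patches.

    Real mammography models are far larger and are trained and
    validated on curated clinical datasets; this only shows the shape.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 32x32 -> 16x16
        )
        self.head = nn.Linear(32 * 16 * 16, 1)      # one logit per patch

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

# A batch of eight random stand-in patches -> "suspicious" probabilities.
patches = torch.randn(8, 1, 64, 64)
probs = torch.sigmoid(PatchClassifier()(patches))
```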

The ability of AI to process huge volumes of data reliably, swiftly and accurately has profound implications for society – both in military and civilian applications.

But can AI make judgements?

Judgement – a key human attribute grounded in experience and professionalism – is not easily captured in a computer algorithm.  Human intelligence – the ability to learn, understand, and adapt to new situations – encompasses both cognitive and emotional aspects, especially regarding abstract matters.

So how does human intelligence relate to our ability to make informed judgements?

During the Iraq war in 2003, UK forces were involved in the ground invasion phase and regularly required air support to defeat opposing Iraqi land forces.  Coordinating these short-notice, urgent requests fell to an Air Support Element (ASE), which maintained an air picture of the available fighter assets and their weapon loads, decided which aircraft were best placed to provide air support, and allocated resources accordingly.

Close Air Support – the delivery of air-launched weapons in close proximity to friendly forces – is a dangerous undertaking, and there were (and are) strict rules governing the circumstances in which Release Authority can be granted to a fighter pilot to release live weapons. On one night during the land campaign, UK ground forces were under attack from Iraqi artillery, which appeared to know the “bearing” of the UK forces but not their range.  The Iraqi guns were therefore “walking” rounds steadily towards the UK forces hiding in the sand dunes.

The UK forces were terrified – their fear was reflected in the tone and language of their radioed air support request.  The fighter allocated by the ASE could see the Iraqi artillery, but in the nighttime conditions, with little ambient light, it could not see the UK forces hiding in the desert.  The rules of engagement (ROE) thus precluded the fighter from releasing its weapons, as there was a heightened risk of harming the very forces they were meant to protect.

However, it was also evident that doing nothing – adhering to the rules – would probably allow the Iraqi artillery eventually to find its target.

An informed judgement had to be made, drawing on decades of operational experience.  Reassurance was sought from the fighter pilot allocated to the mission, who was confident he had identified the enemy target, and Release Authority was granted by the Commanding Officer of the ASE – despite the vocal protestations of many ASE personnel who were acutely aware of the risks involved.

On this occasion, the weapons were effective, and the UK forces were saved.  Could AI have made such a decision?  Would AI have been overruled in the same circumstances?  Senior commanders rely on their experience, teamwork and judgement to make operational decisions – will AI enhance their capability or pose more questions than answers?

Two days after this incident, a small group of UK soldiers arrived at our ASE tent to offer their thanks for the vital air support provided – and for saving their lives.  It was a moving and poignant moment for all concerned.

Away from the front line, AI offers enormous potential to improve efficiency and effectiveness, and the UK has significant intellectual investment in this exciting new frontier of defence capability.  However, as the adage often attributed to Einstein goes, “not everything of value can be measured, and not everything that can be measured is of value”; AI can assess the measurable at ever-increasing pace, but when it comes to a commander’s judgement, it has yet to earn its spurs.

 

“Before we work on artificial intelligence why don’t we do something about natural stupidity?” – Tom Chatfield

 

If you enjoyed this piece, click here to read Sean’s previous episode on the military challenges facing the UK and listen to Sean’s podcast series ‘InDefence’ here on Spotify. 

Author

  • Sean Bell

    Sean Bell enjoyed a first career in the RAF where he flew in Sarajevo, Bosnia, Iraq and Afghanistan. Since 2022 Sean has been providing military analysis for Sky News and other media outlets. He is also the co-host and founder of the RedMatrix Podcast.
