Spies and technology

Machine intelligence

Intelligence agencies are grappling with the promise, and pitfalls, of AI

 




Mar 6th 2021 | 750 words

 

 


 

WHEN IT comes to using artificial intelligence (AI), intelligence agencies have been at it longer than most. In the cold war America's National Security Agency (NSA) and Britain's Government Communications Headquarters (GCHQ) explored early AI to help transcribe and translate the enormous volumes of Soviet phone-intercepts they began hoovering up in the 1960s and 1970s.

 

Yet the technology was immature. One former European intelligence officer says his service did not use automatic transcription or translation in Afghanistan in the 2000s, relying on native speakers instead. Now the spooks are hoping to do better. The trends that have made AI attractive for business in recent years (more data, better algorithms and more processing power to make it all hum) are giving spy agencies big ideas, too.

 

On February 24th GCHQ published a paper on how AI might change its work. Machine-assisted fact-checking could help spot faked images, check disinformation against trusted sources and identify social-media bots that spread it. AI might block cyber-attacks by analysing patterns of activity on networks and devices, and fight organised crime by spotting suspicious chains of financial transactions.

 

Other, less well-resourced organisations have already shown what is possible. The Nuclear Threat Initiative, an American NGO, recently showed that applying machine learning to publicly available trade data could spot previously unknown companies and organisations suspected of involvement in the illicit trade in materials for nuclear weapons. But spy agencies are not restricted to publicly available data.
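The article does not say how NTI's system actually works, but the underlying idea (flag trade records that look statistically unusual and hand them to a human analyst) can be sketched with an off-the-shelf outlier detector. In the illustration below, everything from the field names to the numbers is invented; it is a minimal sketch of the general technique, not NTI's method:

```python
# Illustrative sketch only: flags anomalous trade records with an
# off-the-shelf outlier detector. All fields and figures are hypothetical,
# not NTI's actual data or method.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical trade records: [declared_value, weight_kg, dual_use_score]
records = np.array([
    [12_000, 300, 0.1],
    [11_500, 290, 0.1],
    [13_000, 310, 0.2],
    [95_000,  15, 0.9],   # oddly light, high-value, dual-use-like shipment
])

# Fit the detector; contamination is the assumed share of outliers.
model = IsolationForest(contamination=0.1, random_state=0).fit(records)
flags = model.predict(records)            # -1 marks an outlier
suspicious = np.where(flags == -1)[0]
print(f"records flagged for human review: {suspicious}")
```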

 

Some hope that, aided by their ability to snoop on private information, such modest applications could pave the way to an AI-fuelled juggernaut. AI "will revolutionise the practice of intelligence", gushed a report published on March 1st by America's National Security Commission on Artificial Intelligence, a high-powered study group co-chaired by Eric Schmidt, a former executive chairman of Alphabet, Google's parent company, and Bob Work, a former deputy defence secretary.

 

The report does not lack ambition. It says that by 2030 America's 17 or so spy agencies ought to have built a "federated architecture of continually learning analytic engines" capable of crunching everything from human intelligence to satellite imagery in order to foresee looming threats. The commission points approvingly to the Pentagon's response to covid-19, which integrated dozens of data sets to identify hotspots and manage demand for supplies.

 

Yet what is possible in public health is not always so easy in national security. Western intelligence agencies must contend with laws governing how private data may be gathered and used. In its paper, GCHQ says that it will be mindful of systemic bias, such as whether voice-recognition software is more effective with some groups than others, and transparent about margins of error and uncertainty in its algorithms. American spies say, more vaguely, that they will respect "human dignity, rights and freedoms". These differences may need to be ironed out. One suggestion made by a recent task force of former American spooks, in a report published by the Center for Strategic and International Studies (CSIS) in Washington, was that the Five Eyes intelligence alliance (America, Australia, Britain, Canada and New Zealand) create a shared cloud server on which to store data.

 

In any case, the constraints facing AI in intelligence are as much practical as ethical. Machine learning is good at spotting patterns, such as distinctive patterns of mobile-phone use, but poor at predicting individual behaviour. That is especially true when data are scarce, as in counter-terrorism. Predictive-policing models can crunch data from thousands of burglaries each year. Terrorist attacks are much rarer, and therefore harder to learn from.

 

That rarity creates another problem, familiar to medics pondering mass-screening programmes for rare diseases. Any predictive model will generate false positives, in which innocent people are flagged for investigation. Careful design can drive the false-positive rate down. But because the "base rate" is lower still (there are, mercifully, very few terrorists), even a well-designed system risks sending large numbers of spies off on wild-goose chases.
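The arithmetic is worth making concrete. A back-of-the-envelope sketch, using invented but not implausible numbers, shows how a screen that is wrong only 1% of the time still buries its handful of true hits:

```python
# Back-of-the-envelope base-rate arithmetic. All numbers are invented
# for illustration, not drawn from any real screening programme.
population = 10_000_000     # people screened (hypothetical)
base_rate = 1e-5            # 1 in 100,000 is a genuine threat (hypothetical)
sensitivity = 0.99          # share of true threats the system catches
false_positive_rate = 0.01  # share of innocents wrongly flagged

true_threats = population * base_rate                             # 100
caught = true_threats * sensitivity                               # 99
false_alarms = (population - true_threats) * false_positive_rate  # ~100,000

precision = caught / (caught + false_alarms)
print(f"flagged: {caught + false_alarms:,.0f}, "
      f"of whom genuine: {precision:.2%}")   # roughly 0.1%
```

Roughly a thousand innocent people are flagged for every real threat, and every one of them is a wild-goose chase for some analyst.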

 

And those data that do exist may not be suitable. Data from drone cameras, reconnaissance satellites and intercepted phone calls, for instance, are not currently formatted or labelled in ways that are useful for machine learning. Fixing that is a tedious, time-consuming and still primarily human task, exacerbated by differing labelling standards across and even within agencies, notes the CSIS report. That may not be quite the sort of work that would-be spies signed up for.
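To see what that chore looks like in practice, consider a toy illustration of the harmonisation problem the CSIS report describes: two agencies tag the same objects with incompatible vocabularies, and someone must map both onto a shared schema before one model can be trained. The agency names and labels below are entirely hypothetical:

```python
# Hypothetical illustration of label harmonisation: two agencies tag the
# same objects differently, and a shared mapping is needed before their
# data can train a single model. All names and labels are invented.
AGENCY_A = {"veh_truck": "truck", "veh_tank": "tank"}
AGENCY_B = {"ground-vehicle/heavy": "truck", "armor/mbt": "tank"}

def harmonise(label: str) -> str:
    """Map an agency-specific label to the shared schema, if known."""
    for mapping in (AGENCY_A, AGENCY_B):
        if label in mapping:
            return mapping[label]
    raise KeyError(f"unmapped label: {label!r}")  # needs a human decision

print(harmonise("armor/mbt"))   # -> "tank"
```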







