Can AI help counter terrorism?
By Warisha Rashid
Artificial intelligence (AI) is a technology capable of performing complex tasks that traditionally required human intelligence, such as problem-solving, reasoning and decision-making.
Machine learning (ML), a subset of AI, focuses on algorithms that improve at specific tasks by learning from data. These algorithms detect patterns in a dataset and draw conclusions from them with minimal human intervention. As such capabilities have matured, there has been a growing reliance on advanced technologies to bolster surveillance and threat detection.
China offers an illustrative example: it has leveraged AI to identify individuals deemed potential threats to national security. However, the efficacy of AI models in identifying and pre-empting terrorist activities hinges on the availability of behavioural data from which potential terrorists can be identified and their future activities predicted.
Following the Chinese model, Pakistan can also leverage advanced technologies, particularly artificial intelligence, to enhance national security and counter the threat of terrorism. For instance, INSIKT Intelligence, a startup in the US, has used social media analysis and other information to detect possible online threats. Similar AI tools could be adapted for use in Pakistan to examine online terrorist behaviour in greater detail.
AI can also be used to identify people at risk of radicalization on the internet. Machine learning techniques, such as natural language processing (NLP), make it possible to spot potential indicators of radicalization by analyzing online activity. NLP plays a key role through automated text analysis that identifies language, emotions and ideas, and it can recognize subtle cues in language that indicate a shift towards radical thinking.
By constantly learning from new data, these AI systems can improve their accuracy over time, becoming more adept at distinguishing harmless expressions of opinion from indicators of extremism and radicalization. An example of a tool created for this purpose, while adhering to stringent privacy and security standards, is the EU-funded Real-time Early Detection and Alert System for Online Terrorist Content (RED-Alert) project.
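To make the idea of automated text analysis concrete, the sketch below trains a very small text classifier that scores posts for potentially extremist language. It is not how RED-Alert or any operational system works; it assumes the scikit-learn library, a tiny hand-labelled dataset invented for illustration, and a human analyst reviewing anything that is flagged.

```python
# A minimal, hypothetical sketch of automated text analysis for flagging
# potentially extremist language. The labelled examples and the review
# workflow are illustrative only, not any operational system's design.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = flag for human review, 0 = benign.
texts = [
    "join our struggle and punish the traitors",
    "they deserve violence for what they did",
    "looking forward to the cricket match tonight",
    "great discussion on economic policy at the seminar",
]
labels = [1, 1, 0, 0]

# TF-IDF word features feeding a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# New posts are scored; anything above a chosen threshold is routed to a
# human analyst rather than acted on automatically.
new_posts = ["we must rise up and strike them", "see you at the market later"]
scores = model.predict_proba(new_posts)[:, 1]
for post, score in zip(new_posts, scores):
    print(f"{score:.2f}  {post}")
```

The point of the sketch is the workflow, not the model: real systems would need far larger, carefully audited datasets and constant retraining, which is precisely why continual learning and human oversight matter.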
Mis- and disinformation spread by terrorists on social media pose a serious threat to national security. It is commonly understood that terrorist groups can create an environment conducive to terrorism by spreading false information that sows fear and uncertainty, destabilizing communities and making them easier to manipulate. They also undermine trust in authorities by circulating fake news about government actions, weakening societal cohesion and making extremist narratives more appealing.
For recruitment, terrorists spread distorted ideological narratives and highlight perceived injustices to attract and radicalize individuals who feel marginalized. This flow of false information polarizes society and heightens tensions, normalizes extremist views over time, and facilitates the coordination of terrorist activities under the guise of false narratives.
Sharing such information at scale manually would be slow and costly; therefore, misinformation is largely disseminated by bots, online programmes that carry out repetitive tasks. A 2017 study found around 140 million bots on Facebook, about 27 million on Instagram and 23 million on Twitter. Groups such as ISIL, which have demonstrated proficiency in using bots, can distribute propaganda on social media automatically.
However, fact-checking websites such as Snopes.com can be used to verify the reliability of sources and to identify hate speech and disinformation, helping to combat the significant share of misinformation and fake news spread by terrorists.
Beyond protecting online spaces, Pakistan can invest in biometric verification systems, building on initiatives such as the Safe City Project. The project can be expanded to integrate biometric systems and AI-driven monitoring by installing high-definition CCTV cameras equipped with facial recognition and biometric scanning capabilities at key public locations such as pedestrian crossings, transport hubs and crowded markets.
Once these biometric systems are integrated with AI, the data they capture can be compared against international watchlists and databases to identify potential threats quickly. This approach can significantly enhance national security by enabling real-time identification and monitoring.
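A minimal sketch of what such watchlist comparison could look like is given below. It assumes face embeddings have already been extracted upstream by some facial-recognition model; the watchlist entries, embedding vectors and similarity threshold are hypothetical stand-ins, not any deployed system's design.

```python
# A hypothetical sketch of watchlist matching on pre-computed face embeddings.
# Extracting an embedding from a camera frame is assumed to happen upstream;
# the vectors below are random stand-ins, not real biometric data.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical watchlist: identifier -> 128-dimensional embedding.
watchlist = {f"entry_{i}": rng.normal(size=128) for i in range(1000)}
names = list(watchlist)
matrix = np.stack([watchlist[n] for n in names])          # shape (1000, 128)
matrix /= np.linalg.norm(matrix, axis=1, keepdims=True)   # unit-normalize rows

def match(probe: np.ndarray, threshold: float = 0.6):
    """Return the best watchlist match if cosine similarity clears the threshold."""
    probe = probe / np.linalg.norm(probe)
    scores = matrix @ probe            # cosine similarity against every entry
    best = int(np.argmax(scores))
    if scores[best] >= threshold:
        return names[best], float(scores[best])
    return None, float(scores[best])   # below threshold: no alert raised

# A probe embedding taken from a camera frame (random stand-in here).
probe_embedding = rng.normal(size=128)
print(match(probe_embedding))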
Pakistan has seen such efforts before: the SKYNET programme systematically analyzed metadata from the country's roughly 55 million mobile phone users to flag suspected terrorist activity, with a reported error rate of only 0.008 per cent.
AI and ML offer tremendous potential for combating terrorism in Pakistan through a variety of innovative approaches. Security can be significantly increased by using biometric verification systems.
Leveraging AI's capacity for reasoning and problem-solving, coupled with ML's ability to learn from data, security agencies can develop predictive models of potential terrorist activities and patterns of online radicalization.
Thus, Pakistan can step up its counter-terrorism efforts, provided ethical considerations are addressed and privacy and civil liberties are respected.
The writer is a researcher at the Centre for Aerospace and Security Studies (CASS), Lahore. She can be reached at: info@casslhr.com
Courtesy The News