by Skyler Stokes


During the past decade, artificial intelligence has undergone a revolution that has led to its adoption by governments, companies, and individuals for uses ranging from financial strategizing to mass surveillance. Its implications for counterterrorism policy have only begun to be explored, even as nations around the world race to incorporate the technology into military, law enforcement, and intelligence applications. This piece summarizes the current state of AI’s use in counterterrorism, investigates some of the known benefits and disadvantages of such implementations, and highlights the ethical and moral concerns that must be addressed before the technology can be responsibly used to combat terrorism and extremism.

On January 6, 2021, a large group of Trump supporters, QAnon adherents, and members of extremist movements rioted at the United States Capitol building as Congress was certifying the election of Joe Biden as the 46th President of the United States. The attack, which left five dead, including one Capitol Police officer, set an unfortunate precedent, demonstrating how conspiracy theories, extremist movements, and domestic terrorism can metastasize and spark violent activity in the United States.

This attack could have been prevented. Clear indicators suggested that thousands of Donald Trump supporters and violent extremists were planning to descend on Washington, D.C., for a pro-Trump rally geared toward protesting the election results and even stopping the certification from occurring.[1] What began as a peaceful, if conspiratorial, protest turned into a violent and deadly attack on the Capitol building, American lawmakers, and American democracy. This loose band of extremists, conspiracists, and mainstream Trump supporters committed an act of insurrection that could and should have been prevented from escalating to the degree it did.

Emerging technologies might have mitigated the riot, as they can improve capabilities across a broad range of domains.[2] Artificial intelligence in particular is considered one of the most versatile emerging technologies, with the potential to dramatically increase productivity and efficiency in an array of fields, such as medicine, agriculture, policing, and counterterrorism.[3]

In democracies, domestic terrorist groups typically aim to undermine the governments under which they operate.[4] These groups and movements commonly attack civilian targets in an attempt to demonstrate that the government they seek to dismantle cannot guarantee security. It is therefore critical that the United States work proactively to prevent domestic terrorist attacks through nuanced counterterrorism strategies. Artificial intelligence has the potential to be an effective tool in such a strategy.

Why AI Works

AI has proven to be both useful and lucrative for users across a wide array of industries. Grounded in highly technical mathematical concepts, it is frequently misunderstood, even by top business leaders and government officials.[5] Although experts’ definitions of artificial intelligence vary, their common thread is computers’ ability to make decisions autonomously. This autonomy is the result of complex algorithms that enable a computer to learn from exposure to large amounts of data, with potential impact across many domains, such as healthcare, security, and transportation. Top tech companies have committed to researching and developing artificial intelligence in the private sector, while the U.S. government has invested heavily through programs at agencies like DARPA.[6] Over the next decade, AI has the potential to touch the everyday lives of people globally. However, unless the technology is harnessed in a way that is safe, effective, ethical, and lawful, the outcomes could be detrimental. Achieving this will require a holistic approach: collaboration among government, the tech industry, social scientists, and experts from other domains.

Artificial intelligence would not be possible without massive amounts of data.[7] AI algorithms learn through exposure to large datasets; during this training phase, they extract trends and patterns latent within the data. AI’s biggest advantage comes from its ability to find patterns that would otherwise be indecipherable to human analysts. For these models to be effective, however, they must generally be trained on huge datasets that often require painstaking compilation, cleaning, and labeling by hand. This process, which remains reliant on manual labor, produces datasets that can bear the marks of the specific people who compiled them. Subject matter experts play an important role in ensuring that the data fed to a model is robust, representative of the problem, and diverse. The data collection phase nevertheless represents one of the greatest vulnerabilities for harm and abuse: if skewed or biased data is used to build an AI tool, there can be serious ramifications for harmful discrimination and infringement of rights, as will be discussed later in this piece.
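To make the skewed-data risk concrete, the following minimal sketch, written in Python with scikit-learn, trains a simple classifier on a dataset that over-represents one group. Everything in it (the groups, their decision rules, and the sample sizes) is hypothetical, but it shows how a model can learn the majority group’s patterns while failing the under-represented one.

```python
# A minimal, hypothetical sketch of how skewed training data shapes a
# model. Nothing here reflects a real system: the two "groups," their
# decision rules, and the sample sizes are invented to illustrate the
# imbalance failure mode.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Group A dominates the training set (900 of 1,000 records).
X_a = rng.normal(loc=0.0, size=(900, 2))
y_a = (X_a[:, 0] + X_a[:, 1] > 0).astype(int)

# Group B is under-represented and follows a different pattern.
X_b = rng.normal(loc=2.0, size=(100, 2))
y_b = (X_b[:, 0] - X_b[:, 1] > 2).astype(int)

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X, y)

# The model learns group A's pattern well and group B's poorly --
# the "skewed data" risk described above.
print("accuracy on group A:", round(model.score(X_a, y_a), 2))
print("accuracy on group B:", round(model.score(X_b, y_b), 2))
```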

Existing Uses of AI in Counterterrorism

The intelligence community has employed AI for several years for a variety of tasks. Automated data analytics driven by AI and machine learning are already a critical tool in counterterrorism efforts. AI makes it possible for intelligence agencies to more quickly identify and prioritize suspects, analyze connections between suspects, and even use facial and voice recognition technologies to track persons of interest.[8] Outside of government, private technology companies employ similar models to monitor their platforms for misuse and adversarial behavior. Large social media platforms have substantial experience leveraging AI to track extremist rhetoric online. In her piece “Artificial Intelligence Prediction and Counterterrorism,” Kathleen McKendrick states, “As technology companies increasingly assume duties related to safeguarding their users, tools that identify suicide risk or vulnerability to mental health issues have possibilities for repurposing as tools that could assess susceptibility to violent extremist ideologies.”[9] In other words, social media companies are already using AI to assess users’ mental health, a tool that could easily be adapted to screen for metastasizing violent or hateful rhetoric indicative of extremism.
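At its core, this kind of screening is a text-classification problem. The sketch below, in Python with scikit-learn, shows the basic shape under invented assumptions: the toy posts, labels, and scoring are illustrative stand-ins, not any platform’s actual moderation pipeline, and a real system would train on far larger corpora and route flags to human reviewers.

```python
# A minimal, hypothetical sketch of the text-classification pattern
# behind platform content screening. The toy posts and labels are
# invented; real systems train on vastly larger labeled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled posts: 1 = flagged for human review, 0 = benign.
posts = [
    "join us at the rally downtown this weekend",
    "the election results are final and certified",
    "we must take up arms and make them pay in blood",
    "storm the building and stop the certification by force",
]
labels = [0, 0, 1, 1]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)

# Score an unseen post; in practice a high score would queue the post
# for a human moderator rather than trigger automatic action.
new_post = "bring weapons and make them pay"
score = classifier.predict_proba([new_post])[0, 1]
print(f"flag score: {score:.2f}")
```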

There are several examples of how artificial intelligence has already shaped counterterrorism strategy. In particular, it is already used in predictive counterterrorism to identify potential threats or behavior that the intelligence and law enforcement communities would consider concerning.[10] This can become complicated, because the actors capable of carrying out AI-driven predictive counterterrorism are frequently the actors with access to copious amounts of data. Predictive tactics would be most useful to the intelligence and law enforcement communities, but because of the need for large amounts of data, these capabilities usually sit with tech, social media, and software companies. As digitization continues to intertwine itself with all aspects of life, and as AI improves, there will be more opportunities for AI-driven predictive counterterrorism. This suggests that AI as a counterterrorism tool will only continue to grow in importance.

Potential Uses of AI in Domestic Counterterrorism

One potential future use of AI to combat domestic terrorism is the implementation of facial-recognition technology in specified public spaces, such as government buildings, large-scale arenas like football stadiums, or public marketplaces. This is a controversial application of the technology, however: ensuring compliance with privacy laws and avoiding mass surveillance are critical, and both have proven difficult for many organizations.

In 2015, the Islamic State (ISIS) attacked multiple venues across Paris, France, killing 130 people. One of the targets was the Stade de France, a packed soccer stadium, during a live match attended by then-President François Hollande. A member of ISIS detonated a suicide belt after being refused entry at a security checkpoint at one of the stadium’s entrances, killing himself and one civilian. Had the bomber somehow passed through the checkpoint, the consequences could have been far graver. Another of the coordinated attacks occurred at a concert hall during a live music event; this attack was deadlier, leaving 89 innocent people dead.[11]

AI-driven surveillance cameras have the potential to help prevent such attacks through automated analysis of high-volume video data. If cameras of this kind were installed in large-scale public arenas, such as sports stadiums or concert venues, security officials might achieve higher levels of protection for audience members. Facial recognition cameras could instantly recognize and identify people of interest to law enforcement and the intelligence community, allowing officials to dispatch personnel to areas where suspicious individuals pose a threat to safety. Technology companies specializing in layering AI on top of surveillance equipment have accordingly grown quickly alongside demand for this technology.
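The matching step behind such a system typically reduces each face to a numeric embedding and compares it against a watchlist. The sketch below, in Python with NumPy, illustrates that comparison using cosine similarity; the 128-dimensional embeddings, the watchlist, and the 0.6 threshold are hypothetical, and the face-embedding model itself is assumed rather than implemented.

```python
# A minimal, hypothetical sketch of watchlist matching. Real systems
# derive embeddings from a trained face model; here the embeddings,
# watchlist, and threshold are invented stand-ins.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical watchlist: person ID -> face embedding.
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(3)}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(embedding, threshold=0.6):
    """Return (best match, score) if above threshold, else (None, score)."""
    best_name, best_score = None, 0.0
    for name, ref in watchlist.items():
        score = cosine_similarity(embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name, best_score
    return None, best_score

# Simulate a camera frame: a noisy version of a known embedding.
probe = watchlist["person_1"] + rng.normal(scale=0.3, size=128)
print(match_face(probe))  # -> ('person_1', ~0.95)
```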

Artificial intelligence is also capable of classifying images and video frames, including detecting specific objects within them. While this technology is highly imperfect and will likely require significantly more development, it has the potential to alter the course of violent attacks, particularly events such as mass shootings. AI of this kind already exists in certain applications, as in the case of the company ZeroEyes, which aims to use AI to identify weapons in places such as schools and defense sites.[12] If this technology continues to develop and achieves the ability to confidently identify a broader range of objects at a national scale, it could make a significant impact in hindering violent attacks and shootings, ultimately saving lives and protecting Americans from senseless violence. AI-driven security cameras could theoretically identify guns and explosives and notify not only law enforcement but also the people occupying the surveyed space. Had this technology been employed at events such as the Paris attacks, or at any of the mass shootings within the U.S., it could have given civilians and law enforcement precious time to avoid violence.
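Structurally, such a system amounts to an alerting loop wrapped around an object detector. The Python sketch below shows that loop under stated assumptions: the detector is a stub, and the threat labels, confidence threshold, and Detection type are invented for illustration rather than drawn from ZeroEyes’ actual models or APIs.

```python
# A minimal, hypothetical sketch of a weapons-detection alerting loop.
# The detector is a placeholder; the labels and threshold are invented.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "rifle", "handgun", "backpack"
    confidence: float  # detector's score in [0, 1]

def detect_objects(frame):
    """Placeholder for a real video-frame object detector."""
    return [Detection("rifle", 0.93), Detection("backpack", 0.71)]

ALERT_LABELS = {"rifle", "handgun", "explosive"}
THRESHOLD = 0.85  # tuned to trade false alarms against missed threats

def scan_frame(frame):
    """Keep only high-confidence detections of threat objects."""
    return [d for d in detect_objects(frame)
            if d.label in ALERT_LABELS and d.confidence >= THRESHOLD]

for hit in scan_frame(frame=None):
    # A deployed system would notify law enforcement and on-site staff.
    print(f"ALERT: {hit.label} detected ({hit.confidence:.0%} confidence)")
```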

Implementation Challenges

There are a number of challenges to large-scale AI implementation, especially for sensitive applications like facial recognition or surveillance of public areas. The predominant controversy making AI difficult to use in counterterrorism is the need to ensure that the technology is implemented constitutionally, ethically, and legally. AI-driven security cameras installed in busy public places often run into human rights and constitutional issues, as seen in the case of Joy Buolamwini, a Ghanaian American AI researcher whose face went unrecognized by facial-analysis software until she put on a white mask.[13] Aside from human rights violations and underdeveloped, racially biased systems, there are constitutional implications as well. For example, if large-scale AI-driven security cameras were used for weapons identification, the effort could be rendered largely useless in states that permit the open carry of firearms. In those states, it would be difficult for AI to distinguish a legal gun owner from a bad actor wielding a weapon, and terrorists could still carry out plots where open carry laws are in place. It would therefore likely be very challenging for policy experts and technologists alike to build a system of AI-enhanced security cameras that does not discriminate against citizens or infringe on their rights. AI-enhanced surveillance is made far more difficult and constitutionally ambiguous by the Second Amendment in the United States, in contrast with countries with much stricter gun laws, such as the U.K. or Japan.[14]

Another highly problematic aspect of artificial intelligence is the possibility for latent, discriminatory bias to affect an algorithm’s predictions. Prejudicial bias may arise both incidentally, usually from imbalanced or inequitable training data, and intentionally. This predicament is demonstrated by the Chinese government’s use of facial recognition technology to distinguish the minority Uighur population in China from the majority Han Chinese.[15] The Uighurs, a predominantly Muslim group, differ culturally from the Han, and these differences have fueled conflict with the Communist Party for many years. Because of this tension, the Chinese government has aimed to oppress the Uighurs, going as far as committing human rights violations against the minority population. Reports state that China is detaining between one and two million Uighurs in detention camps across Xinjiang.[16] President Xi Jinping has approved the use of facial recognition technology to identify and track Uighurs across China, using racial profiling to identify individuals regardless of whether they have committed any crimes. Deep learning algorithms have enabled mass surveillance cameras to identify Uighurs based on facial features, skin tone, physical build, and other arbitrary characteristics. Aside from being racist, these policies have further ramifications: companies and even individuals have received Chinese government-endorsed blowback for speaking out against China’s treatment of the Uighurs. Companies like H&M and Nike recently drew attention for expressing disapproval of China’s practices against the Uighur community, and China likely helped launch a disinformation campaign against them in response, further demonstrating how unethical AI profiling can produce layered negative effects that reach beyond the profiled community itself.

China is not the only source of concern about AI bias. Ultimately, an artificial intelligence’s actions are determined by its programming and its training data. It is therefore critical to ensure that those responsible for creating AI do not build inherent biases into their systems. It is equally critical to guard against latent biases, biases that can develop as a system grows and ingests more data.[17] In an adaptive AI model, for example, biases can accumulate over time as disparities appear in the data the system collects. Technologists and data scientists alike should account for such latent biases in their research and development of AI systems.
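One safeguard this implies, offered here as a hypothetical sketch rather than an established method, is to periodically audit an adaptive model’s error rates per demographic group so that drift-induced bias is caught instead of silently absorbed. In the Python example below, the predictions, group labels, and gap threshold are all invented; the point is the auditing pattern, not the numbers.

```python
# A minimal, hypothetical sketch of a per-group bias audit: compare
# false positive rates across groups and flag the model for review
# when the gap grows too large. All numbers here are invented.
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = y_true == 0
    return float(y_pred[negatives].mean()) if negatives.any() else 0.0

def audit_by_group(y_true, y_pred, groups, max_gap=0.05):
    """Return per-group FPRs and whether the gap warrants review."""
    rates = {g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    return rates, max(rates.values()) - min(rates.values()) > max_gap

# A toy batch of model outputs with group labels.
y_true = np.array([0, 0, 0, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 1, 0, 0])
groups = np.array(["a"] * 5 + ["b"] * 5)

rates, needs_review = audit_by_group(y_true, y_pred, groups)
print(rates, "-> review needed:", needs_review)
```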

The challenge of AI ethics became even more salient when Google controversially fired one of its AI researchers, Dr. Timnit Gebru. Gebru, a Black woman and professional AI ethicist, captured the challenge of AI bias in a Facebook post, writing, “I’m not worried about machines taking over the world. I’m worried about groupthink, insularity and arrogance in the A.I. community — especially with the current hype and demand for people in the field. The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.” At Google, Gebru conducted research that made one thing very clear: artificial intelligence research and development needed to diversify beyond a predominantly white and male workforce in order to prevent the harmful biases Gebru already saw taking hold. According to Gebru, her termination from Google came shortly after she had both criticized the company’s approach to diversifying its workforce and published a research paper on the detrimental biases in Google’s AI systems.[18]

Google, as one of the world’s leading technology companies, is responsible for ensuring that its new products and technologies, including its developing AI capabilities, minimize harmful biases. It is critical that leading AI companies and experts ensure that a technology with significant potential to improve the safety and security of Americans does not end up harming or discriminating against innocent individuals. Gebru’s work on combating AI bias is crucial for the safety of all people, and her termination from Google points to a much larger problem of diversity, inclusion, and bias.

It is easy for artificial intelligence to fall victim to both intentional and inadvertent prejudicial biases. Implementing facial recognition technology to combat domestic terrorism in the United States would require carefully built and audited deep learning systems to ensure freedom from biases based on physical features. And beyond protecting innocent Americans from racial profiling, there is a substantial risk that other countries could obtain the same technologies with the intention of building deliberately discriminatory systems. This scenario must be avoided: racially oriented models of governance would pose a significant threat not only to the United States but to innocent lives globally.

Artificial intelligence poses a dual-use challenge. While it certainly has the potential to significantly improve counterterrorism efforts, it also brings mass surveillance within reach. Mass surveillance is highly problematic and should be starkly avoided; doing so is critical for both national and international security. If the U.S. wishes to shape the norms and ethics of global artificial intelligence usage, it must promote democratic practices and steer away from mass surveillance. If well regulated, however, this technology can help protect citizens’ lives while upholding the law and guarding against bias and discrimination.

Conclusion

The attack on the Capitol on January 6 set a disturbing precedent for the United States. Although facial recognition technology was used afterward to identify dangerous insurrectionists,[19] it could have been deployed more strategically beforehand to deter the attackers, and predictive tactics could have been employed to blunt the severity of the event altogether. Artificial intelligence could have made a very significant difference in that and many other deadly events.

As the technology continues to develop, significant applications for the security and law enforcement communities will become more apparent and accessible. Although artificial intelligence development poses significant challenges, the security benefits ultimately outweigh the potential costs. That said, it is critical that artificial intelligence experts and policymakers alike take the challenges of AI implementation seriously. AI-driven prejudicial biases are highly problematic and have the potential to be incredibly harmful to minority communities. It is therefore essential that, before AI is used to improve counterterrorism strategies, its ethical implications are seriously considered. While AI is certain to provide beneficial tools to law enforcement and the intelligence community in combating domestic terrorism, such strategies should be developed slowly and carefully to ensure that unethical applications of the technology do not harm innocent civilians, particularly within minority communities.

It is not a matter of if but when artificial intelligence will play a more crucial role in our day-to-day lives, and specifically in the intelligence and security domains. Almost certainly, AI will continue to make inroads in security fields and will continue to be rolled out by government agencies and private companies. Artificial intelligence is bound to change many areas of life drastically, and it is therefore critical to assess the threats and opportunities it poses for the country. Ultimately, artificial intelligence will prove an incredibly useful tool in prevention strategies and counterterrorism tactics, but it must be fully vetted, politically and technologically, to ensure maximum safety, effectiveness, and lawfulness.


[1] https://www.washingtonpost.com/politics/2021/01/28/who-could-have-predicted-capitol-siege-plenty-people/

[2] https://www.mckinsey.com/~/media/McKinsey/Industries/Advanced%20Electronics/Our%20Insights/How%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/MGI-Artificial-Intelligence-Discussion-paper.ashx

[3] https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/

[4] https://www.annualreviews.org/doi/pdf/10.1146/annurev-polisci-032211-221825

[5] https://www.brookings.edu/research/what-is-artificial-intelligence/

[6] https://www.darpa.mil/work-with-us/ai-next-campaign

[7] https://www.forbes.com/sites/willemsundbladeurope/2018/10/18/data-is-the-foundation-for-artificial-intelligence-and-machine-learning/?sh=7b7325c451b4

[8] https://www.chathamhouse.org/sites/default/files/2019-08-07-AICounterterrorism.pdf

[9] https://www.chathamhouse.org/sites/default/files/2019-08-07-AICounterterrorism.pdf

[10] https://www.chathamhouse.org/2019/08/artificial-intelligence-prediction-and-counterterrorism

[11] https://www.bbc.com/news/world-europe-34818994

[12] https://zeroeyes.com

[13] https://alltogether.swe.org/2020/01/cameras-everywhere-examining-the-conflict-between-technology-and-human-rights/

[14] https://www.theguardian.com/us-news/2016/mar/15/so-america-this-is-how-you-do-gun-control

[15] Michael Kanaan, T-Minus AI (2020)

[16] Michael Kanaan, T-Minus AI (2020)

[17] https://academic.oup.com/jamia/article/27/12/2020/5859726

[18] https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html

[19] https://www.reuters.com/world/us/us-lawmakers-aim-curtail-face-recognition-even-technology-ids-capitol-attackers-2021-01-18/


 
