Harms and Risks of AI in the Military is a series of talks, panel discussions, and roundtables held at Mila on December 2nd and 3rd, 2024.
The workshop will take place in the Agora hall of Mila, located at 6650 Rue Saint-Urbain, Montreal, QC H2S 3H1.
Times are given in local Montreal time (Eastern Standard Time, UTC-5).
Please find below our final schedule.
December 2nd
Time (ET) | Session | Speaker | Title |
---|---|---|---|
08:30 - 09:00 | Check-in and breakfast | ||
09:00 - 09:10 | Opening remarks | Organizers | |
09:10 - 09:30 | Welcome address | Yoshua Bengio | |
09:30 - 10:15 | Keynote talk (in-person) | Sarah Grand-Clément | **Artificial Intelligence Beyond Weapons: Application and Impact of AI in the Military Domain.** Within the United Nations, the application of artificial intelligence (AI) in the military domain has, to date, primarily been discussed in the context of the United Nations Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS). However, the application of AI within the military domain extends beyond the issue of LAWS, as well as beyond applications relating to the use of force and the narrow tasks of target selection and target engagement within the targeting process. This talk will explore the current and near-future AI capabilities that may be applied in the military domain, beyond weapons. It will also examine the strengths and limitations regarding the application of AI to these military tasks, and raise key issues for consideration around the application of AI to the military domain. |
10:15 - 11:00 | Keynote talk (in-person) | Branka Marijan | **The Battle for Control: The Struggle to Regulate Military AI and Autonomous Weapons.** The race to regulate military AI and autonomous weapons is fraught with hurdles. Advanced technologies are being tested in contemporary conflict zones, often bypassing ethical scrutiny in favor of tactical advantage. Technology firms, eager to 'battle test' their technology, wield growing influence over global norms, raising concerns about accountability and oversight. Meanwhile, a lack of political will among states leaves critical guardrails undeveloped, creating a regulatory vacuum. At the core, questions regarding human control over autonomous systems and AI-aided decision-making remain unanswered. This discussion offers a candid assessment of the challenges facing meaningful global regulation and the urgent need for action. |
11:00 - 11:15 | Coffee break | ||
11:15 - 12:30 | Panel discussion | Yoshua Bengio, Elina Noor, Jessica Dorsey, Wanda Muñoz, Lode Dewaegheneire | Governance of AI applications in the military |
12:30 - 14:00 | Lunch break | ||
14:00 - 14:45 | Keynote talk (in-person) | Jonathan Horowitz | **AI in Armed Conflict: What Could Possibly Go Wrong?** Militaries are increasingly finding new and advanced ways to use artificial intelligence in support of their war-fighting efforts. The scope of this support might seem endless, but each of the different applications requires an evaluation of the impact it may have on the battlefield for civilians, civilian objects, and others granted the protections of international humanitarian law (IHL), also known as the laws of war. This keynote will present some of the most common military uses of AI in armed conflict today, explore what risks they pose, and describe how the rules and principles of IHL regulate these uses, specifically with respect to cyber and information operations, autonomous weapons systems, and AI military decision support systems. The presentation will include a discussion of the risks that arise when AI systems operate at a speed beyond that of human control and understanding. At the same time, the presentation will also acknowledge that potential uses for AI exist that could help militaries satisfy their IHL obligations. |
14:45 - 15:00 | Contributed talk 1 (in-person) | Mohamed Abdalla | **Understanding Computer Science Students' Views of Military (and Military-Adjacent) Work.** The increased (potential) adoption of AI by militaries around the world has drawn the attention and raised concerns of both legislators and computer scientists working in industry. However, we do not have a good sense of the views of the field. More specifically: are computer science students seeking jobs concerned about their labour being used for military purposes or in military contexts? Are they aware of the working relationships between large US technology companies and militaries around the world? How does this knowledge (or lack thereof) affect their decision to apply to these companies? What would it take to make students reconsider working for companies known to apply for military contracts? We conducted an online survey of computer science students at Canadian universities who are seeking full-time jobs (or recent graduates who have recently obtained their first post-graduation job). Initial results seem to indicate that the majority of students do not particularly privilege the ethics of their labour over other considerations (e.g., remuneration or location). The majority of students were not concerned with their labour being used for military purposes, though this was not the case for all demographic subgroups. For those who were concerned about their labour being used for military purposes, a plurality knew of at least some, if not all, of the military contracts taken by the companies to which they applied. Compared to other ethical concerns (such as environmental impact), students were less concerned by the usage of their work in military contexts (or for military purposes). Understanding students' views on the above questions is vital for a myriad of roles, be it educators looking to study the effectiveness of ethics courses, industry trying to gauge incoming worker sentiments, or military recruiters attempting to understand possible challenges. |
15:00 - 15:15 | Contributed talk 2 (virtual) | Mst Rafia Islam, Azmine Toushik Wasi | **Balancing Power and Ethics: A Framework for Addressing Human Rights Concerns in Military AI.** AI has made significant strides recently, leading to various applications in both civilian and military sectors. The military sees AI as a solution for developing more effective and faster technologies. While AI offers benefits like improved operational efficiency and precision targeting, it also raises serious ethical and legal concerns, particularly regarding human rights violations. Autonomous weapons that make decisions without human input can threaten the right to life and violate international humanitarian law. To address these issues, we propose a three-stage framework (Design, In Deployment, and During/After Use) for evaluating human rights concerns in the design, deployment, and use of military AI. Each phase includes multiple components that address various concerns specific to that phase, ranging from bias and regulatory issues to violations of International Humanitarian Law. Through this framework, we aim to balance the advantages of AI in military operations with the need to protect human rights. |
15:15 - 15:30 | Contributed talk 3 (in-person) | Max Lamparth | **Human vs. Machine: Behavioral Differences between Expert Humans and Language Models in Wargame Simulations.** To some, the advent of artificial intelligence (AI) promises better decision-making and increased military effectiveness while reducing the influence of human error and emotions. However, there is still debate about how AI systems, especially large language models (LLMs) that can be applied to many tasks, behave compared to humans in high-stakes military decision-making scenarios with the potential for increased risks of escalation and unnecessary conflicts. To test this potential and scrutinize the use of LLMs for such purposes, we use a new wargame experiment with 214 national security experts designed to examine crisis escalation in a fictional U.S.-China scenario and compare the behavior of human player teams to LLM-simulated team responses in separate simulations. Wargames have a long history in the development of military strategy and the response of nations to threats or attacks. Here, we find that the LLM-simulated responses can be more aggressive and are significantly affected by changes in the scenario. We show considerable high-level agreement between the LLM and human responses, but significant quantitative and qualitative differences in individual actions and strategic tendencies. These differences depend on intrinsic biases in LLMs regarding the appropriate level of violence following strategic instructions, the choice of LLM, and whether the LLMs are tasked to decide for a team of players directly or first to simulate dialog between a team of players. When simulating the dialog, the discussions lack quality and maintain a farcical harmony. The LLM simulations cannot account for human player characteristics, showing no significant difference even for extreme traits, such as “pacifist” or “aggressive sociopath.” When probing behavioral consistency across individual moves of the simulation, the tested LLMs deviated from each other but generally showed somewhat consistent behavior. Our results motivate policymakers to be cautious before granting autonomy or following AI-based strategy recommendations. |
15:30 - 15:45 | Invited talk (in-person) | Racky Ba | Humanity & Inclusion Canada: Mission and Armed Violence Reduction (AVR) Actions |
15:45 - 17:00 | Poster session | ||
17:00 - 18:30 | Social and dinner ||
December 3rd
Time (ET) | Session | Speaker | Title |
---|---|---|---|
08:45 - 09:15 | Check-in and breakfast | ||
09:15 - 09:45 | Keynote talk (virtual) | Gisela Luján Andrade | **Autonomous Weapons in Latin America: Organized Crime, Gendered Impacts and Activism for Regulation.** The proliferation of Autonomous Weapons Systems in the hands of organized crime presents an urgent challenge for Latin America, with significant repercussions for security and social stability. This presentation will explore how these technologies could be exploited by criminal organizations in the region, exacerbating violence—particularly against women and marginalized communities—while reinforcing structural inequalities. It will also highlight the critical role of Latin American civil society in addressing these interconnected risks and advancing pathways toward meaningful regulation. |
09:45 - 10:30 | Keynote talk (in-person) | Paul Lushenko | **Artificial Intelligence, Trust, and Military Decision-Making: Evidence from Survey Experiments in the US Military.** What shapes military attitudes of trust in artificial intelligence (AI)? When used in concert with humans, AI is thought to help militaries maintain lethal overmatch of adversaries on the battlefield as well as optimize leaders' decision-making in the war-room. Despite these sanguine predictions, it is unclear what shapes soldiers' trust in AI, thus encouraging them to overcome skepticism of machines. To inform soldiers' understanding of AI, I explore three important but under-studied research questions. First, will soldiers trust AI used for different—tactical and strategic—purposes and with varying—human and machine—oversight? Second, what factors shape soldiers' trust in AI? Third, how are these trust outcomes complicated, if at all, by generational differences across the military ranks? To investigate these questions, I draw on novel middle-range theories that shape the use of survey experiments among rare and elite samples of the US military. The results suggest that soldiers' trust in AI is not guaranteed. Rather, trust is complex and multidimensional, shaped by underlying technical, operational, and regulatory considerations. These findings, which are consistent across the military ranks, provide the first experimental evidence for military attitudes of trust in AI, which have policy, strategy, and research implications. |
10:30 - 10:45 | Coffee break | ||
10:45 - 12:00 | Panel discussion | Branka Marijan, Sarah Grand-Clément, Maria Vanina Martinez, Aaron Luttman, Andrew W. Reddie | Technical issues and responses to the risks of the use of AI in the military |
12:00 - 13:30 | Lunch break | ||
13:30 - 14:15 | Keynote talk (virtual) | Jacquelyn Schneider | **AI and Strategic Stability.** How will artificial intelligence affect strategic stability: whether states go to war and whether those wars go nuclear? Pundits and practitioners proclaim the revolutionary impact of artificial intelligence for intelligence, targeting, allocation of weapons, and even lethality. However, as AI changes military power, it also has implications for strategic stability. Early warning, nuclear stability, and incentives for first strike are all impacted by how AI is developed, tested, integrated, and applied to military power. What efforts are already being taken by the US military, and what can militaries do to ensure that embracing the AI revolution doesn't lead to strategic instability? |
14:15 - 15:30 | Panel discussion | Jonathan Horowitz, Paul Biggar, Laura Nolan | Role of tech workers in AI militarization and disarmament |
15:30 - 15:45 | Coffee break | ||
15:45 - 16:45 | Roundtable discussions | ||
16:45 - 17:00 | Synthesis and key insights | ||
17:00 - 17:15 | Closing remarks ||