Keynote Speakers and Panelists

Dr. Yoshua Bengio

Founder and Scientific Director · Full Professor · AI Chair · Scientific Director

Mila · Université de Montréal · CIFAR · IVADO

Bio Yoshua Bengio is Full Professor at Université de Montréal and the Founder and Scientific Director of Mila - Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as Senior Fellow and acts as Scientific Director of IVADO. He received the A.M. Turing Award for his work in deep learning and is the computer scientist with the highest h-index. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, a Knight of the Legion of Honor of France, an Officer of the Order of Canada, a member of the UN's Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology since 2023, and a Canada CIFAR AI Chair. He chairs the International Scientific Report on the Safety of Advanced AI.


Dr. Branka Marijan

Senior Researcher · Board Member

Project Ploughshares · Peace and Conflict Studies Association of Canada

Bio Branka leads Project Ploughshares' research on the military and security implications of emerging technologies. Her work examines ethical concerns regarding the development of autonomous weapons systems and the impact of artificial intelligence and robotics on security provision and trends in warfare. She holds a PhD from the Balsillie School of International Affairs with a specialization in conflict and security. She has conducted research on post-conflict societies and has published academic articles and reports on the impacts of conflict on civilians and on diverse issues of security governance, including security sector reform.

The Battle for Control: The Struggle to Regulate Military AI and Autonomous Weapons The race to regulate military AI and autonomous weapons is fraught with hurdles. Advanced technologies are being tested in contemporary conflict zones, often bypassing ethical scrutiny in favor of tactical advantage. Technology firms, eager to 'battle test' their technology, wield growing influence over global norms, raising concerns about accountability and oversight. Meanwhile, a lack of political will among states leaves critical guardrails undeveloped, creating a regulatory vacuum. At the core, questions regarding human control over autonomous systems and AI-aided decision-making remain unanswered. This discussion offers a candid assessment of the challenges facing meaningful global regulation and the urgent need for action.


Sarah Grand-Clément

Researcher

United Nations Institute for Disarmament Research (UNIDIR)

Bio Sarah Grand-Clément is a Researcher in the Security and Technology Programme and the Conventional Arms and Ammunition Programme of the United Nations Institute for Disarmament Research (UNIDIR). Her work examines the intersection of technology with conventional arms, exploring both the challenges and threats that technology can pose to international security and the benefits it can bring in preventing violent conflict and enabling peace. Sarah also has a particular interest in the use of futures methodologies to help explore these complex policy issues, with expertise in horizon scanning, serious gaming and future scenarios. Prior to joining UNIDIR, Sarah was a Senior Analyst working on defence and security policy issues at RAND Europe. She holds an MSc in Arab World Studies from Durham University.

Artificial Intelligence Beyond Weapons: Application and Impact of AI in the Military Domain Within the United Nations, the application of artificial intelligence (AI) in the military domain has, to date, primarily been discussed in the context of the United Nations Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS). However, the application of AI within the military domain extends beyond the issue of LAWS, and beyond applications relating to the use of force and the narrow tasks of target selection and target engagement within the targeting process. This talk will explore the current and near-future AI capabilities that may be applied in the military domain, beyond weapons. It will also examine the strengths and limitations of applying AI to these military tasks, and raise key issues for consideration around the application of AI in the military domain.


Dr. Elina Noor

Senior Fellow

Asia Program, Carnegie Endowment for International Peace

Bio Elina Noor focuses on developments in Southeast Asia, particularly the impact and implications of technology in reshaping power dynamics, governance, and nation-building in the region. She also serves on the International Committee of the Red Cross's Global Advisory Board on digital threats during conflict. Previously, Elina was director of political-security affairs and deputy director of the Washington, D.C. office at the Asia Society Policy Institute. Prior to that, she was an associate professor at the Daniel K. Inouye Asia-Pacific Center for Security Studies in Honolulu. She spent most of her career at the Institute of Strategic and International Studies Malaysia, where she last served as director of foreign policy and security studies. Elina was also formerly with the Brookings Institution's Project on U.S. Relations with the Islamic World.


Jessica Dorsey

Assistant Professor of International Law

Utrecht University School of Law

Bio Jessica Dorsey, J.D., LL.M., is an Assistant Professor of International Law and director of the Realities of Algorithmic Warfare project at Utrecht University. She is also an Expert Member of the Global Commission on the Responsible Use of AI in the Military Domain. Her current research focuses on the legitimacy of military targeting operations in light of increasing autonomy in warfare, with a specific focus on the protection of civilians in armed conflict. Jessica is also a renowned legal scholar and practitioner on issues related to the use of force, especially in the context of drone warfare, having worked in the field for more than 15 years. She is an Associate Fellow at the International Centre for Counter-Terrorism, The Hague and the Managing Editor of the international law weblog, Opinio Juris.


Wanda Muñoz

Member

Feminist AI Research Network

Bio Wanda Muñoz is an international consultant with more than 20 years of experience in humanitarian disarmament and a member of the Latin American Hub of the Feminist Artificial Intelligence Research Network. She has worked with international NGOs and United Nations organizations in Latin America, Africa, Europe and Southeast Asia. She has worked on and published about the ethical, legal and humanitarian risks of autonomous weapons systems for UNESCO, Mila, the ICRC and others. As an independent expert appointed by the Ministry of Foreign Affairs of Mexico, she contributed her experience on policies and programs on gender equality, diversity and human rights to the Global Partnership on Artificial Intelligence. She is a member of UNESCO's Women for Ethics in AI Network and has co-organized multiple regional and international conferences, workshops and panels with UNESCO, UNFPA, UNOPS and others across the globe on different aspects of AI, humanitarian disarmament and assistance to victims of war.


Gisela Luján Andrade

Founder

Perú por el Desarme

Bio Gisela Luján Andrade is a political advocacy consultant with over 15 years of experience in humanitarian disarmament, human rights, and political communication. Her expertise encompasses analysis, advocacy, and research on social movements, autonomous weapons systems (AWS), and their humanitarian and social impact on civilians. She is the founder of 'Perú por el Desarme', a civil association promoting awareness of the humanitarian risks of AWS and fostering a culture of peace. Gisela has contributed to international and regional processes related to landmines, cluster munitions, nuclear weapons and AWS. She has published on topics related to emerging military technologies, activism, feminist self-care, AWS regulation, and AI governance. Gisela serves as the representative in Peru of SEHLAC (the Latin American and Caribbean Human Security Network) and is a member of the global coalition Stop Killer Robots. She holds two master's degrees in political science, from the Pontifical Catholic University of Peru and the Panthéon-Sorbonne University, and has pursued doctoral studies in sociology at the École des Hautes Études en Sciences Sociales (EHESS).

Autonomous Weapons in Latin America: Organized Crime, Gendered Impacts and Activism for Regulation The proliferation of Autonomous Weapons Systems in the hands of organized crime presents an urgent challenge for Latin America, with significant repercussions for security and social stability. This presentation will explore how these technologies could be exploited by criminal organizations in the region, exacerbating violence—particularly against women and marginalized communities—while reinforcing structural inequalities. It will also highlight the critical role of Latin American civil society in addressing these interconnected risks and advancing pathways toward meaningful regulation.


Jonathan Horowitz

Deputy Head of the Legal Department

International Committee of the Red Cross (ICRC)

Bio Jonathan is the deputy head of the legal department of the ICRC's Delegation for the United States and Canada, based in Washington, D.C. He focuses on legal issues relating to urban warfare, partnered military operations, and new and emerging technologies in armed conflict, including information operations. In this role, he engages with the U.S. and Canadian governments, as well as private technology companies and others. His latest publication is “One Click from Conflict: Some Legal Considerations Related to Technology Companies Providing Digital Services in Situations of Armed Conflict” (Chicago Journal of International Law, Vol. 24.2).

AI in Armed Conflict: What Could Possibly Go Wrong? Militaries are increasingly finding new and advanced ways to use artificial intelligence in support of their war-fighting efforts. The scope of this support might seem endless, but each application requires an evaluation of the impact it may have on the battlefield for civilians, civilian objects, and others granted the protections of international humanitarian law (IHL), also known as the laws of war. This keynote will present some of the most common military uses of AI in armed conflict today, explore the risks they pose, and describe how the rules and principles of IHL regulate these uses, specifically with respect to cyber and information operations, autonomous weapons systems, and AI military decision-support systems. The presentation will include a discussion of the risks that arise when AI systems operate at a speed beyond human control and understanding. At the same time, it will acknowledge that some potential uses of AI could help militaries satisfy their IHL obligations.


Dr. Maria Vanina Martinez

Tenured Scientist

IIIA-CSIC

Bio Vanina's research focus is in the area of knowledge representation and reasoning (KR&R), a subdiscipline of AI, with an emphasis on the formalization of knowledge dynamics, the management of inconsistent and uncertain information, and the study of the ethical and social impact of the development and use of systems based on artificial intelligence. She is currently a tenured scientist in the Department of Logic and Reasoning at the IIIA-CSIC. Since November 2023, she has been a member of the UN Secretary-General's Advisory Body on AI.


Paul Biggar

Founder

Tech for Palestine

Bio Paul Biggar leads Tech for Palestine, a collaboration of tech projects advocating for Palestine. At its core, Tech for Palestine is an incubator to start and support projects for Palestine with mentorship, tech volunteers, marketing support, and connections to the broad Palestine advocacy community. He previously founded tech startups Darklang and CircleCI, as well as a Y Combinator-backed startup. He graduated with a PhD in Computer Science from Trinity College Dublin in 2010.


Laura Nolan

Software Engineer · Volunteer

Freelancer · Stop Killer Robots

Bio Laura Nolan is a software engineer in Dublin, Ireland and a volunteer with the Stop Killer Robots coalition. She worked at Google for nine years as a staff site reliability engineer. In 2018, she walked out over Google's involvement in Project Maven, a US military project to use AI to analyze drone surveillance footage.


Jacquelyn Schneider

Hargrove Hoover Fellow

Hoover Institution, Stanford University · Hoover Wargaming and Crisis Simulation Initiative

Bio Jacquelyn Schneider is the Hargrove Hoover Fellow at the Hoover Institution, the Director of the Hoover Wargaming and Crisis Simulation Initiative, and an affiliate with Stanford's Center for International Security and Cooperation. Her research focuses on the intersection of technology, national security, and political psychology with a special interest in cybersecurity, autonomous technologies, wargames, and Northeast Asia. She was previously an Assistant Professor at the Naval War College as well as a senior policy advisor to the Cyberspace Solarium Commission.

AI and Strategic Stability How will artificial intelligence affect strategic stability: whether states go to war and whether those wars go nuclear? Pundits and practitioners proclaim the revolutionary impact of artificial intelligence on intelligence, targeting, allocation of weapons, and even lethality. However, as AI changes military power, it also has implications for strategic stability. Early warning, nuclear stability, and incentives for first strike are all affected by how AI is developed, tested, integrated, and applied to military power. What efforts is the US military already taking, and what can militaries do to ensure that embracing the AI revolution doesn't lead to strategic instability?


Lieutenant Colonel Paul Lushenko

Lieutenant Colonel · Assistant Professor

US Army · US Army War College

Bio Paul Lushenko is a Lieutenant Colonel in the US Army, an Assistant Professor at the US Army War College, and Strategist for the Joint Counter-Small Unmanned Aircraft Systems Office. He is also a Professorial Lecturer at George Washington University's Elliott School of International Affairs, Council on Foreign Relations Term Member, Senior Fellow at Cornell University's Tech Policy Institute and Institute of Politics and Global Affairs, and Non-Resident Expert at RegulatingAI. His work lies at the intersection of emerging technologies, politics, and national security, and he also researches the implications of great power competition for regional and global order-building. Paul is the author and editor of three books, including Drones and Global Order: Implications of Remote Warfare for International Society (2022), The Legitimacy of Drone Warfare: Evaluating Public Perceptions (2024), and Afghanistan and International Relations (under contract). Paul has written extensively on emerging technologies and war, publishing in academic journals, policy journals, and media outlets such as Security Studies, Foreign Affairs, and The Washington Post. He earned his Ph.D. and M.A. in International Relations from Cornell University. He also holds an M.A. in Defense and Strategic Studies from the U.S. Naval War College, an M.A. in International Relations and a Master of Diplomacy from The Australian National University, and a B.S. from the U.S. Military Academy.

Artificial Intelligence, Trust, and Military Decision-Making: Evidence from Survey Experiments in the US Military What shapes military attitudes of trust in artificial intelligence (AI)? When used in concert with humans, AI is thought to help militaries maintain lethal overmatch of adversaries on the battlefield as well as optimize leaders' decision-making in the war-room. Despite these sanguine predictions, it is unclear what shapes soldiers' trust in AI, thus encouraging them to overcome skepticism of machines. To inform soldiers' understanding of AI, I explore three important but under-studied research questions. First, will soldiers trust AI used for different—tactical and strategic—purposes and with varying—human and machine—oversight? Second, what factors shape soldiers' trust in AI? Third, how are these trust outcomes complicated, if at all, by generational differences across the military ranks? To investigate these questions, I draw on novel middle-range theories that shape the use of survey experiments among rare and elite samples of the US military. The results suggest that soldiers' trust in AI is not guaranteed. Rather, trust is complex and multidimensional, shaped by underlying technical, operational, and regulatory considerations. These findings, which are consistent across the military ranks, provide the first experimental evidence for military attitudes of trust in AI, which have policy, strategy, and research implications.


Aaron Luttman

Senior Technical Advisor

Pacific Northwest National Laboratory

Bio Aaron Luttman is a researcher at Pacific Northwest National Laboratory – a multi-mission US Department of Energy R&D facility – where he focuses on bringing artificial intelligence and other emerging technologies to US national security missions. A mathematician by training, he has published over 40 journal articles on pure and applied mathematics and given over 100 research presentations, including as a Society for Industrial and Applied Mathematics Visiting Lecturer and a Mathematical Association of America Distinguished Lecturer. Dr. Luttman’s primary technical focus is AI assurance, including the safety, security, robustness, and vulnerabilities of AI models. He served as a co-organizer of the US National Academies workshop “AI and Justified Confidence,” designed to assess the challenges of bringing AI to Army command and control, and he is also on the faculty at Montana State University, where he teaches “Data Science for National Security.”


Prof. Andrew W. Reddie

Associate Research Professor & Faculty Director, BRSL

Goldman School of Public Policy, University of California, Berkeley · Berkeley Risk and Security Lab

Bio Andrew W. Reddie is the founder of the Berkeley Risk and Security Lab (BRSL). His research at the intersection of technology, politics, and security examines how emerging military capabilities shape international order, with a focus on nuclear weapons policy, cybersecurity, AI governance, and innovation. He is also a pioneer of the use of wargaming methods in both classroom and experimental settings. Andrew serves in faculty leadership roles at UC Berkeley's Center for Security in Politics, the Berkeley APEC Study Center, and the UC-wide Disaster Resilience Network. He is also an affiliate of UC Berkeley's Institute of International Studies and the University of California's Institute on Global Conflict and Cooperation. Andrew received his B.A. and M.A. degrees from the University of California, Berkeley, an M.Phil. in International Relations from Oxford University, and his Ph.D. from the University of California, Berkeley, in 2019.


Lode Dewaegheneire

Fellow and Independent Expert

University of Liège (Belgium)

Bio Lode Dewaegheneire's research focuses on autonomous weapon systems, in particular through his roles as military advisor with Mines Action Canada and as military and parliamentary outreach consultant with the Campaign to Stop Killer Robots. Previously, he focused on transparency in disarmament and its impact on the implementation of disarmament and arms control treaties. His other fields of research include the (EU) dual-use export control regime and the Arms Trade Treaty (ATT). Dewaegheneire has worked in humanitarian disarmament for many years and served as Coordinator for Article 7 Reporting for both the Anti-Personnel Mine Ban Convention and the Convention on Cluster Munitions. He also worked for three years in the Verification Division of the OPCW, where he was responsible for the yearly Verification Implementation Report.

