Our Featured Publications
We encourage you to explore our publications and the individual contributions of our team members to gain deeper insights into our collective work. Our team's research continues to deepen understanding of AI futures and to inspire safer directions for the field.
Strategic Insights from Simulation Gaming of AI Race Dynamics
AI Future: Insights from 43 Intelligence Rising Games
Ross Gruetzemacher, Shahar Avin, James Fox, Alexander K. Saeri. 2024
Based on insights from 43 facilitated games over four years, we identify key patterns and strategies shaping AI development. Our analysis highlights the destabilizing effects of AI races, the vital role of international cooperation, and challenges in aligning corporate and national interests. We examine how gameplay reveals complexities in AI governance, including cybersecurity risks, fragile agreements, and unforeseen crises that shift AI trajectories. By documenting these insights, we provide foresight for policymakers, industry leaders, and researchers navigating AI’s evolving landscape.
Exploring AI Futures Through Role Play
Methodology
Shahar Avin, Ross Gruetzemacher, James Fox 2024
We present an innovative methodology for studying and teaching the impacts of AI through a role-play game. The game serves two primary purposes: 1) training AI developers and AI policy professionals to reflect on and prepare for future social and ethical challenges related to AI, and 2) exploring possible futures involving AI technology development, deployment, social impacts, and governance. The game presented here has undergone two years of development and has been tested through over 30 events involving between 3 and 70 participants. The game is still evolving, but early findings show that role-play is a valuable tool for exploring AI futures. It helps individuals and organizations reflect on AI’s impact and avoid strategic mistakes.
Our Facilitator’s Individual Contributions
Our team members’ research extends beyond the work they do within our organisation.
Through collaborations with experts across various fields, they contribute to broader conversations on AI development, ethics, and governance.
Explore how their work helps shape the conversation.
SHAHAR AVIN (London, UK)
The Malicious Use of Artificial Intelligence
Forecasting, Prevention, and Mitigation
Brundage, Miles, et al. 2018
This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats.
Tackling threats to informed decision-making in democratic societies
Promoting epistemic security in a technologically-advanced world
Seger, E., Avin, S., Pearson, G., Briers, M., Ó hÉigeartaigh, S., Bacon, H., et al. 2020
This report presents recommendations for further research and action to promote epistemic security against adversarial threats and crises across a range of scenarios.
Filling gaps in trustworthy development of AI
Incident sharing, auditing, and other concrete mechanisms could help verify the trustworthiness of actors
Avin et al. 2021
This article explores concrete methods for ensuring AI developers can prevent harm and demonstrate their trustworthiness, to create a more reliable AI ecosystem.
Autonomy and machine learning at the interface of nuclear weapons, computers and people
Avin, S., & Amadae, S. 2019
This essay focuses on higher-order effects: those that stem from the introduction of such technologies into more peripheral systems, with a more indirect (but no less real) effect on nuclear risk. It first describes and categorizes the new threats introduced by these technologies. It then considers policy responses to address these new threats.
Classifying global catastrophic risks
Chapter 3 from An Anthology of Global Risk
Avin et al. 2018
This chapter presents a novel framework for classifying global catastrophic risk scenarios according to the critical system (or systems) affected, the global spread mechanism, and the failure of prevention and mitigation.
Exploring artificial intelligence futures
Chapter 8 from An Anthology of Global Risk
Shahar Avin. 2019
This chapter illustrates various methodologies for exploring AI futures, including interdisciplinary and participatory approaches such as role-play scenarios. It sets out common terms, outlines contemporary technologies and trends, discusses each methodology’s advantages and limitations, and suggests strategies for forming better-informed expectations.
JAMES FOX (London, UK)
A Causal Model of Theory-of-Mind in AI Agents
J. Foxabbott, R. Subramani, J. Fox, F. Ward. 2024
The dynamics of agency become significantly more complex when autonomous agents interact with other agents and humans, requiring theory-of-mind: the ability to reason about the beliefs and intentions of others. We present a causal model of theory-of-mind in AI agents and prove the existence of important equilibria.
Reasoning about causality in games
L. Hammond, J. Fox, T. Everitt, R. Carey, A. Abate, M. Wooldridge. 2023
Causal reasoning and game-theoretic reasoning are fundamental topics in artificial intelligence, among many other disciplines: this paper is concerned with their intersection.
It highlights possible applications of causal games, aided by an extensive open-source Python library.
An Analysis and Evaluation of Methods Currently Used to Quantify the Likelihood of Existential Hazards
Chapter 6 from An Anthology of Global Risk
S. J. Beard, Thomas Rowe, and James Fox. 2020
This chapter evaluates various methods for quantifying existential hazards, finding that no single method is best, though some are more suitable for certain purposes.