Welcome to the Minerva-Funded AI Strategies Grant Project
A Minerva-funded grant project at George Mason University on AI infrastructures and their consequences in a global context!
AI Strategies is funded by a $1.389 million grant from the Minerva Research Initiative, a Department of Defense program that takes pride in "sharing social scientific contributions that advance our understanding of the social, cultural, political, economic, and environmental dynamics of security." Together, we're decoding the complexities of AI governance systems at the national and international levels.
MINERVA
AI Strategies examines how cultural values and institutional priorities shape artificial intelligence infrastructures in national and global contexts, in order to better understand what comparative AI contexts mean for security.
AI Strategies is funded by a three-year, $1.39 million grant awarded to George Mason University to study the economic and cultural determinants of global artificial intelligence (AI) infrastructures, and to describe their implications for national and international security. The team began work on the project on April 15, 2022.
The grant was awarded by the Department of Defense's Minerva Research Initiative, a joint program of the Office of Basic Research and the Office of Policy that supports social science research focused on expanding basic understanding of security.
The center will draw on expertise from several George Mason colleges, including the Schar School of Policy and Government, the College of Humanities and Social Sciences, and the College of Engineering and Computing.
Vision
Our project has created a comprehensive global database of national and international artificial intelligence (AI) policies and regulations. Using computational methods, our research provides empirical understandings of how cultural, economic, and institutional factors shape the development and deployment of AI infrastructures worldwide. By examining these determinants, we provide context-sensitive outlines of AI ecosystems and their responsiveness to local needs while contributing to global innovation.
Project Framework
Our team is at the forefront of exploring the intersection of AI strategies and their global implications for defense and security studies. We have created a comprehensive global AI dashboard that provides a snapshot of strategies across various sectors, including fiscal policy, education, health, research and development, human rights, and security.

Created by Caroline Wesson and Manpria Dua, working with J.P. Singh
Interdisciplinary AI Research
This platform allows researchers to compare AI strategies at a glance, facilitating data-driven decisions in areas critical to national security. In addition, we have curated a database of national and subnational AI policies from over 100 countries, making our repository one of the most sophisticated and up-to-date resources in the field. This collection, along with regional and multilateral reports, enables us to provide invaluable insights into global and regional trends in AI governance and regulation.
Looking ahead, we are pioneering a collaboration between social scientists and computer scientists to gain a deeper understanding of how AI strategies are being developed and deployed across different countries. This is one of the first times that experts from these diverse disciplines have worked together to examine the intricacies of global AI infrastructures. Our team is employing a variety of techniques from both fields, including Large Language Models (LLMs), Latent Dirichlet Allocation (LDA), ethnographic methods, and in-depth interviews. This interdisciplinary approach allows us to capture a more holistic view of AI strategies, examining not only the technical frameworks but also the cultural, social, and political factors that influence their design and implementation.
By fostering collaboration among academia, policymakers, and multilateral organizations, we strive to build frameworks for AI development that promote fairness, accountability, transparency, and innovation. Ultimately, our goal is to contribute to the creation of AI systems that are aligned with sustainable global development goals, advancing human well-being and fostering a fairer and more inclusive digital future.