
Duality AI and CoVar Partner on DARPA’s ASIMOV Program to Shape Ethical Standards for Military Autonomous Systems
Duality AI, the company behind the advanced digital twin simulation platform Falcon, has announced a strategic partnership with CoVar, a leader in responsible AI and machine learning solutions, to support the Defense Advanced Research Projects Agency’s (DARPA) new initiative: the Autonomy Standards and Ideals with Military Operational Values (ASIMOV) program.
This ambitious and timely effort by DARPA seeks to establish a foundational framework for evaluating whether autonomous systems used in defense contexts align with U.S. military values. These values include adherence to the laws of armed conflict, military ethics, and the commander’s intent—an important concept in military operations that emphasizes decentralized decision-making aligned with mission objectives.
As artificial intelligence and autonomous systems become more sophisticated and take on greater roles in complex decision-making, both on and off the battlefield, concerns about their ethical and legal implications have grown significantly. The ASIMOV program responds by aiming to define robust, quantitative standards for ethical alignment and performance in autonomous systems. This is particularly urgent because these systems are no longer confined to research labs—they are rapidly approaching operational readiness and real-world deployment across defense applications.
Building an Ethical Evaluation Framework for AI in Combat
At the heart of the ASIMOV program is the goal of quantifying ethical decision-making in autonomous systems. Rather than relying solely on post-hoc evaluation or theoretical assumptions, DARPA is calling for a scientifically rigorous framework that can provide repeatable, data-driven insight into how autonomous systems handle ethically sensitive scenarios—especially in dynamic and high-stakes environments like military operations.
To achieve this, ASIMOV is supported by an Ethical, Legal, and Social Implications (ELSI) advisory group, which brings together military leaders, ethicists, engineers, and legal experts. Their collective mission is to create a shared vocabulary and set of tools that will allow developers, researchers, and military stakeholders to measure and discuss the ethical performance of AI systems in clear, actionable terms.
This is where the partnership between Duality AI and CoVar becomes essential. As part of CoVar’s multidisciplinary team, Duality AI is contributing to the development of GEARS—short for Gauging Ethical Autonomous Reliable Systems—an ethical testing infrastructure specifically designed for autonomous systems.
Falcon and GEARS: Merging Simulation and Ethics
Duality AI’s flagship platform, Falcon, plays a crucial role in the GEARS testing framework. Falcon is known for its ability to create high-fidelity digital twins—virtual models of real-world environments, systems, and agents, including vehicles and human actors. These models allow for accurate simulation of real-world conditions, which is vital for testing the behavior of AI in ethically complex situations.
Falcon’s integration with autonomous system software enables in-the-loop simulations, where real-time synthetic data streams from multi-modal virtual sensors—including electro-optical, infrared (IR), LiDAR, and RADAR—feed directly into the AI being tested. This capability allows researchers to observe how AI systems perceive their environment, process information, and make decisions under conditions that mimic real-world challenges.
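The closed-loop pattern described above can be sketched in miniature: a simulator emits synthetic sensor frames, the autonomy software under test consumes them, and every perception-decision pair is logged for later analysis. The sketch below is purely illustrative—`SensorFrame`, `simulate_frame`, and `autonomy_stack` are invented stand-ins, not Falcon APIs, and the "sensors" are random scalars rather than rendered EO/IR/LiDAR streams.

```python
import random
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One synthetic multi-modal observation (hypothetical format)."""
    tick: int
    eo: float      # stand-in for an electro-optical reading
    ir: float      # stand-in for an infrared reading
    lidar: float   # stand-in for a LiDAR range estimate, in meters

def simulate_frame(tick: int, rng: random.Random) -> SensorFrame:
    """Placeholder for one simulator step producing synthetic sensor data."""
    return SensorFrame(tick, rng.random(), rng.random(), 5.0 + 95.0 * rng.random())

def autonomy_stack(frame: SensorFrame) -> str:
    """Toy decision policy standing in for the system under test."""
    return "hold" if frame.lidar < 20.0 else "proceed"

def run_episode(ticks: int, seed: int = 0) -> list:
    """Run the in-the-loop cycle, recording each perception-decision pair."""
    rng = random.Random(seed)
    log = []
    for t in range(ticks):
        frame = simulate_frame(t, rng)
        log.append((frame, autonomy_stack(frame)))
    return log
```

Because the simulator is seeded, an episode is exactly reproducible—one reason simulation-based evaluation supports repeatable, data-driven testing.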
Moreover, Falcon can procedurally generate a wide variety of scenarios using diverse input sources, including structured knowledge graphs that represent ethical dilemmas, commander’s intent, and military rules of engagement. This procedural generation enables researchers to efficiently simulate thousands of distinct yet relevant operational scenarios, speeding up the evaluation process and enhancing the robustness of the framework.
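To make the procedural-generation idea concrete, here is a minimal sketch in which a knowledge graph—reduced to subject-predicate-object triples—is expanded into every concrete scenario variant. The graph contents and the expansion logic are invented for illustration and are far simpler than what an ethical-dilemma graph in GEARS would encode.

```python
from itertools import product

# Invented toy graph: triples loosely mirroring the idea of encoding
# actors, environments, and rules of engagement as structured knowledge.
GRAPH = {
    ("scenario", "has_actor", "convoy"),
    ("scenario", "has_actor", "uav"),
    ("scenario", "has_environment", "urban"),
    ("scenario", "has_environment", "desert"),
    ("scenario", "has_constraint", "minimize_collateral"),
}

def options(graph, predicate):
    """Collect all objects attached to a given predicate."""
    return sorted(obj for _, pred, obj in graph if pred == predicate)

def generate_scenarios(graph):
    """Expand the graph into every actor/environment combination,
    carrying the shared constraints into each variant."""
    constraints = options(graph, "has_constraint")
    return [
        {"actor": actor, "environment": env, "constraints": constraints}
        for actor, env in product(options(graph, "has_actor"),
                                  options(graph, "has_environment"))
    ]
```

Even this toy graph yields four distinct scenarios from five triples; richer graphs multiply combinatorially, which is how procedural generation can reach thousands of distinct yet relevant variants.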
Dr. Pete Torrione, CTO of CoVar, highlights Falcon’s critical role in the program:
“Falcon will let us rapidly create and simulate a large variety of required ethical scenarios. With GEARS, we are defining a new mathematics of ethics, where ethical scenarios and commander’s intent are represented by knowledge graphs. Falcon’s capability to ingest these graphs and procedurally generate simulation-ready scenarios is vital for a framework designed to evaluate the ethical readiness of an autonomous system.”
This “mathematics of ethics” refers to the use of structured representations of knowledge—such as semantic graphs and formal logic—to model and measure ethical behavior, allowing autonomous systems to be assessed not only on functional accuracy but also on moral alignment with human values.
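One way to picture such a quantitative assessment: encode rules as predicates over a logged decision trace and report the fraction of decisions that satisfy every applicable rule. The rules, trace format, and scoring below are invented for illustration—real GEARS metrics would be far richer—but they show how ethical alignment can become a measurable number rather than a judgment call.

```python
# Hypothetical rules of engagement encoded as predicates over one logged
# decision record; both rules and record fields are invented examples.
RULES = {
    "verify_before_engage": lambda d: d["action"] != "engage" or d["target_verified"],
    "respect_no_strike":    lambda d: not (d["action"] == "engage" and d["in_no_strike_zone"]),
}

def alignment_score(trace):
    """Fraction of decisions satisfying all rules (1.0 = fully aligned)."""
    if not trace:
        return 1.0
    ok = sum(all(rule(d) for rule in RULES.values()) for d in trace)
    return ok / len(trace)

# Example trace: the second decision engages an unverified target,
# so it violates "verify_before_engage".
trace = [
    {"action": "engage", "target_verified": True,  "in_no_strike_zone": False},
    {"action": "engage", "target_verified": False, "in_no_strike_zone": False},
    {"action": "hold",   "target_verified": False, "in_no_strike_zone": True},
]
```

Here `alignment_score(trace)` is 2/3: two of three decisions satisfy every rule. A score like this is repeatable across simulated scenarios, which is what makes it usable as an evaluation standard.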
A Shared Vision for Responsible AI
For Duality AI, this partnership with CoVar aligns perfectly with the company’s longstanding mission: to provide a safe, ethical, and reliable pathway for AI and autonomous systems to transition from development to deployment. Since its inception, Duality has focused on leveraging simulation as a risk-free proving ground for autonomous technologies, enabling developers to iterate, stress-test, and validate their solutions before releasing them into the real world.
Apurva Shah, CEO and co-founder of Duality AI, expressed his enthusiasm for joining the ASIMOV program:
“We’re thrilled to be partnering with CoVar on the ASIMOV program. As a leader in developing responsible AI/ML solutions, CoVar is the ideal partner with whom to advance one of our main goals: helping to transition AI into the real world safely, responsibly, and reliably. During a time of justifiable apprehension about the increasing role of AI in our world, leveraging Falcon towards evolving more ethical AI is a critical endeavor and we’re honored to partner with CoVar and contribute to this important program.”
The ethical concerns surrounding AI—particularly in defense and security applications—are not abstract. These technologies could one day be tasked with decisions about the use of force, distinguishing combatants from civilians, and interpreting complex mission directives with moral nuance. Ensuring that these systems uphold ethical principles consistent with national and international law is essential for maintaining trust and accountability.
A Multidisciplinary Coalition for Ethical Autonomy
The ASIMOV program’s success depends on collaboration across disciplines, and the initiative has attracted a powerful coalition of experts. In addition to engineers and AI researchers, the project team includes professors of ethics, military veterans with command experience, legal scholars, and experts in human-machine teaming.
Together, they are working to translate abstract ethical principles into computational terms—an effort that could influence not just military AI, but also broader applications in law enforcement, autonomous vehicles, healthcare robotics, and beyond.
As these systems become more embedded in daily life, frameworks like GEARS could provide the universal standards needed to ensure that AI behaves consistently with human expectations, across sectors and use cases.
Beyond Defense: A Model for Global AI Ethics
While the ASIMOV program is focused on military applications, its implications are far-reaching. The methods, tools, and insights developed under this initiative could be adapted to shape global standards for ethical AI—from autonomous cars making life-or-death decisions in traffic to drones delivering humanitarian aid in disaster zones.
By combining high-fidelity simulation, procedural scenario generation, ethical modeling, and rigorous data analysis, Duality AI and CoVar are helping lay the groundwork for a future where autonomous systems are not only intelligent but also ethically sound.
As the GEARS framework matures through ASIMOV, its quantitative approach to evaluating ethical decision-making is expected to ripple across the broader AI and autonomy community—informing best practices, regulatory standards, and system design principles for years to come.