Description: The rapid development of Artificial Intelligence (AI) in warfare, against a backdrop of global military expenditure reaching $2,443 billion, has raised serious ethical concerns about autonomous weapons systems (AWS). While some argue that such systems increase efficiency and reduce casualties, the potential for civilian harm and the lack of human oversight remain pressing concerns. This workshop explores the paradoxical possibility that AI could also be a force for good, upholding international humanitarian law (IHL) and human rights during conflict.

We will examine:
- AI and Adherence to Law: Can AI be programmed to understand and comply with the complexities of international law governing warfare?
- AI for Monitoring Violations and War Crime Investigations: Can AI-powered tools monitor potential human rights violations during conflict, identifying patterns and gathering evidence for investigations?

This session will bring together diverse stakeholders (governments, militaries, legal experts, and civil society) for a multi-faceted discussion, fostering innovative approaches to:
- Responsible AI Development: Promote best practices and international collaboration for the responsible development and deployment of AI in the military sphere.
- Ethical Frameworks: Identify the legal and ethical frameworks needed to ensure transparency and accountability and to minimize the risks associated with AI in warfare.

By fostering a solution-oriented dialogue, this workshop aims to pave the way for a future in which AI serves as a tool for upholding IHL and protecting human rights amid the complexities of autonomous warfare.