Description: In the European Union, laws and regulations governing the use of Artificial Intelligence (AI) have been finalized, marking a significant milestone. The AI Act, the primary legislation in question, regulates AI applications through a risk-based approach. Its implications reach beyond the EU's borders, because the AI Act applies to any organization using AI products within the EU. Given the widespread presence of non-European AI suppliers in the EU market, these entities must also comply with the AI Act according to their product's risk level. Consequently, affected organizations must establish an AI governance framework to meet these legal obligations.

The AI Act distinguishes between various roles in meeting its requirements, including users and providers of AI systems, but its impact extends to other stakeholders such as civil society, regulators, auditors, and lawmakers outside the EU. The complexity of the law stems from its risk-based approach and the diverse roles it addresses, posing challenges for implementation, enforcement, and evaluation of its effectiveness. This complexity is expected to have a significant global impact on organizations, potentially establishing a new standard for AI governance worldwide.

The objective of the session is to discuss with participants the steps necessary to:
• Develop a standardized AI governance framework to mitigate AI-related risks for organizations.
• Foster global adoption by promoting interoperability among regulatory regimes worldwide.
• Ensure the legislation remains adaptable to future advances in AI technology.

Participants will include lawmakers, AI developers, civil society organizations, and auditors, facilitating a comprehensive exploration of the topic.