Description: Talk of AI has permeated the digital governance and policy space, from the principles and values that should steer AI development to the question of which risks are most urgent to mitigate. We talk a lot about challenges, opportunities, and ways to ‘govern AI for humanity’; we tend to believe that new governance frameworks will be the solution we need to leverage AI for good, address the risks, and account for both misuse and missed uses of the technology. But there are also broader, perhaps more philosophical questions about AI that we may want to spend a little more time on. For instance, how much time do we take to reflect on what it means to have intelligent machines functioning and working for or alongside us in our society?

We’d like to invite you to an open-ended conversation filled with questions. Through collective sense-making, we wish to ground the discussion of the risks and opportunities AI brings in human experience. In this out-of-the-box workshop, we promise not solutions but a set of critical questions that prompt us to clarify the problems we are trying to solve. The following questions are a primer:

- Epistemological challenges in knowledge creation: Are Large Language Models (LLMs) our new co-workers? Analysts? Assistants? What roles do we imagine LLMs playing vis-à-vis humans?
- Missing the forest for the trees: Are there forms of intelligent machines and agents beyond the LLMs we tend to talk so much about? If so, how much are they reflected in our AI policy and governance discussions?
- Assigning human attributes to AI: What do we talk about when we talk about AI ‘understanding’, ‘reasoning’, and the like?
- When words lose their meaning: Five years from now, will we all sound like ChatGPT? How will human-machine co-generated language evolve as it comes to depend less on context and more on tokens associated with probabilities?
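For participants less familiar with what ‘tokens associated with probabilities’ means in practice, the following minimal sketch may help. It is a toy illustration, not any real model: the vocabulary and probability values are invented, and real LLMs work over far larger vocabularies. It shows how an LLM-style sampler picks the next word from a weighted distribution, and how a lower ‘temperature’ makes the most probable words win more often, one intuition for why machine-assisted text can start to sound the same everywhere.

```python
import random

# Invented next-token distribution: made-up words and probabilities,
# standing in for what a real LLM computes at each generation step.
next_token_probs = {
    "delve": 0.30,
    "landscape": 0.25,
    "leverage": 0.20,
    "nuance": 0.15,
    "moreover": 0.10,
}

def sample_next_token(probs, temperature=1.0):
    """Pick one token, weighted by its probability.

    Lower temperature sharpens the distribution, so the most likely
    token is chosen more often; higher temperature flattens it.
    """
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# At temperature 0.5, "delve" dominates; at 2.0, rarer words appear more.
print(sample_next_token(next_token_probs, temperature=0.5))
```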