Description: The conversation surrounding online child safety in the United States has persisted for nearly three decades, with recent momentum driving proposals to establish standards for provider-enforced safety measures. These measures aim to protect society's most vulnerable members, ensure platform accountability, and give children default safety and privacy protections. Critics, however, argue that the pursuit of child safety often undermines the imperative to safeguard individual privacy rights. The debate has reached a stalemate with the widespread adoption of end-to-end encryption, which blinds providers to the content on their own platforms and inhibits law enforcement agencies from detecting and addressing harmful content such as child sexual abuse material (CSAM). This session examines the intricate balance between privacy concerns and online child safety measures, exploring the ethical, legal, technical, and practical challenges involved. Additionally, Section 230 of the United States' Communications Decency Act has significantly shaped the online child safety debate by granting broad immunity to internet platforms for user-generated content. Supporters argue this shield promotes unfettered online user dialogue and industry innovation; critics counter that it creates a powerful disincentive for industry to implement effective strategies to mitigate CSAM and other clearly harmful or illegal content. This session will explore whether a balance between these competing equities is achievable, even with the promise, proffered by some in industry, that AI can discern harmful user content despite impenetrable encryption.