Adopting Safety by Design Principles

OpenAI, alongside industry leaders including Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, and Stability AI, has committed to implementing robust child safety measures in the development, deployment, and maintenance of generative AI technology as part of Safety by Design principles. The initiative is led by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to tackling complex problems in technology and society, and it aims to reduce the risks generative AI poses to children. By adopting comprehensive Safety by Design principles, OpenAI and its peers are ensuring that child safety is prioritized at every stage of AI development.

To date, we have worked to minimize the potential for our models to generate content that harms children, set age restrictions for ChatGPT, and actively engaged with the National Center for Missing and Exploited Children (NCMEC), the Technology Coalition, and other government and industry stakeholders on child protection issues and on strengthening reporting mechanisms.

As part of this Safety by Design effort, we commit to:

  1. Develop: Develop, build, and train generative AI models that proactively address child safety risks.

    • Responsibly source training datasets, detect and remove child sexual abuse material (CSAM) and child sexual exploitation material (CSEM) from training data, and report confirmed CSAM to the relevant authorities (see the filtering sketch after this list).

    • Incorporate feedback loops and iterative stress-testing strategies into our development process (see the stress-testing sketch after this list).

    • Deploy solutions to address adversarial misuse.
  2. Deploy: Release and distribute generative AI models only after they have been trained and evaluated for child safety, providing protections throughout the process.

    • Combat and respond to abusive content and behavior, and incorporate prevention efforts.

    • Encourage developer ownership in safety by design.
  3. Maintain: Maintain model and platform safety by proactively understanding and continuing to respond to child safety risks.

    • Commit to removing new AI-generated CSAM (AIG-CSAM) created by bad actors from our platform.

    • Invest in research and future technology solutions.
    • Fight against CSAM, AIG-CSAM, and CSEM on our platform.
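
The training-data commitment above is, in practice, largely an engineering problem: files whose hashes match blocklists of known abusive material, such as the hash lists organizations like NCMEC share with industry, are excluded from the corpus and escalated for reporting. The Python sketch below illustrates that pattern only in outline; the file paths, the known_hashes.txt blocklist, and the report_for_review hook are hypothetical, and production systems typically use perceptual hashes (for example PhotoDNA) rather than exact cryptographic hashes so that re-encoded copies still match.

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def report_for_review(path: Path) -> None:
        # Placeholder: a real pipeline would escalate to trained reviewers and,
        # for confirmed CSAM, file a report with the relevant authorities.
        print(f"flagged for review and reporting: {path}")

    def filter_dataset(image_dir: Path, blocklist: set[str]) -> list[Path]:
        """Keep only files whose hash is not on the blocklist; flag the rest."""
        kept = []
        for path in sorted(p for p in image_dir.iterdir() if p.is_file()):
            if sha256_of(path) in blocklist:
                report_for_review(path)  # excluded from the training corpus
            else:
                kept.append(path)
        return kept

    if __name__ == "__main__":
        # Hypothetical inputs: one hash per line in known_hashes.txt, raw files in raw_images/
        known_hashes = {line.strip() for line in open("known_hashes.txt") if line.strip()}
        training_files = filter_dataset(Path("raw_images"), known_hashes)
        print(f"{len(training_files)} files retained for training")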

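The feedback-loop commitment is likewise something teams typically automate: a curated set of adversarial prompts is run against the model before each release, failures feed back into mitigation work, and the same cases are retested in the next round. The sketch below shows that loop in the abstract and does not reflect OpenAI's actual tooling; generate and violates_policy are stand-ins for the model under test and a safety classifier, and the prompt list is hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class RoundReport:
        round_id: int
        failures: list[str] = field(default_factory=list)

    def generate(prompt: str) -> str:
        """Stand-in for the model under test; a real harness would call the model API."""
        return "I can't help with that."

    def violates_policy(response: str) -> bool:
        """Stand-in safety classifier: anything that is not a refusal counts as a failure."""
        return not response.lower().startswith("i can't")

    def stress_test(prompts: list[str], max_rounds: int = 3) -> list[RoundReport]:
        """Run repeated evaluation rounds, feeding failures back into the next round."""
        reports = []
        for round_id in range(max_rounds):
            failures = [p for p in prompts if violates_policy(generate(p))]
            reports.append(RoundReport(round_id, failures))
            if not failures:
                break
            # Feedback loop: after mitigations (filter updates, fine-tuning) are applied,
            # the previously failing prompts are what gets retested in the next round.
            prompts = failures
        return reports

    if __name__ == "__main__":
        red_team_prompts = ["example adversarial prompt 1", "example adversarial prompt 2"]  # hypothetical
        for report in stress_test(red_team_prompts):
            print(f"round {report.round_id}: {len(report.failures)} failing prompts")
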
This effort is an important step toward preventing the misuse of AI technology to create or disseminate AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. As part of the working group, we have also agreed to publish annual progress updates.