We recognize that generating speech that resembles people's voices carries serious risks, which are especially top of mind in an election year. We are engaging with U.S. and international partners across government, media, entertainment, education, and civil society to ensure we incorporate their feedback as we build.
The partners currently testing Voice Engine have agreed to usage policies that prohibit impersonating another individual or organization without consent or legal right. In addition, our agreements with these partners require explicit and informed consent from the original speaker, and we do not allow developers to build ways for individual users to create their own voices. Partners must also clearly disclose to their audience that the voices they are hearing are AI-generated. Finally, we have implemented a set of safety measures, including watermarking to trace the origin of any audio generated by Voice Engine, as well as proactive monitoring of how it is being used.
We believe that any broad deployment of synthetic voice technology should be accompanied by voice authentication experiences that verify the original speaker is knowingly adding their voice to the service, as well as a list of prohibited voices that detects and prevents the creation of voices too similar to those of prominent figures.
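The announcement does not describe how such a prohibited-voice check would be implemented. One plausible sketch, under the assumption that each voice is represented by a speaker embedding from some encoder model (the embeddings, threshold, and helper names below are all hypothetical), is to compare a candidate voice against each protected embedding by cosine similarity and reject it if any score exceeds a threshold:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_prohibited(candidate: np.ndarray, blocklist: list[np.ndarray],
                  threshold: float = 0.85) -> bool:
    """Flag a candidate voice if it is too similar to any protected voice.

    The 0.85 threshold is an illustrative value, not a published figure;
    a production system would calibrate it against false-accept/reject rates.
    """
    return any(cosine_similarity(candidate, v) >= threshold for v in blocklist)

# Toy 3-dimensional embeddings; real speaker encoders produce
# high-dimensional vectors learned from audio.
protected = [np.array([0.9, 0.1, 0.4]), np.array([0.2, 0.8, 0.5])]

near_match = np.array([0.88, 0.12, 0.41])  # nearly parallel to the first protected voice
distinct = np.array([-0.5, 0.3, -0.8])     # unrelated direction

print(is_prohibited(near_match, protected))  # True: too close to a protected voice
print(is_prohibited(distinct, protected))    # False: sufficiently dissimilar
```

This only illustrates the matching step; the harder parts of a real system are the speaker-encoder quality and choosing a threshold that resists deliberate evasion.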