As always, you are in control of your data with ChatGPT. Your chats with GPTs are not shared with builders. If a GPT uses third-party APIs, you choose whether data can be sent to those APIs. When builders customize their own GPT with actions or knowledge, they can choose whether user chats with that GPT can be used to improve and train our models. These choices build on the existing privacy controls you already have, including the option to opt your entire account out of model training.
We have set up new systems to help review GPTs against our usage policies. These systems stack on top of our existing mitigations and aim to prevent users from sharing harmful GPTs, including those that involve fraudulent activity, hateful content, or adult themes. We have also taken steps to build user trust by allowing builders to verify their identity. We will continue to monitor and learn how people use GPTs and to update and strengthen our safety mitigations. If you have concerns about a specific GPT, you can also use the reporting feature on the GPT sharing page to notify our team.
GPTs will continue to get more useful and smarter, and eventually you will be able to delegate real tasks in the real world to them. In the field of AI, such systems are often discussed as "agents." We think it is important to move toward this future gradually: it will require careful technical and safety work, as well as time for society to adapt. We have been thinking deeply about the societal implications and will share more analysis soon.