OpenAI has unveiled new privacy controls for its ChatGPT artificial intelligence (A.I.) platform.
The company now lets users exclude their ChatGPT conversations from the data used to train ChatGPT and other A.I. models OpenAI is developing.
The move provides a new level of privacy for consumers who often share sensitive personal information with the popular A.I. chatbot.
OpenAI says ChatGPT users can now turn off their chat history with a switch in their account settings, which keeps their conversations from being saved in ChatGPT’s history sidebar.
Privately held OpenAI and its largest backer, Microsoft (MSFT), are trying to make people feel more comfortable using the ChatGPT platform.
Since ChatGPT was launched last November, millions of people have experimented with it to help write essays, plan vacations, and get medical advice, raising questions about how A.I. systems can and should be used.
OpenAI has previously said that its software works to filter out the personally identifiable information of its users.
However, the San Francisco-based company said it will continue to train its A.I. models on user data and will still store people’s data for 30 days before deleting it.
OpenAI also said users can now email themselves a downloadable copy of the data they’ve generated while using ChatGPT, including their conversations on the A.I. platform.
OpenAI plans to launch a business subscription plan for ChatGPT in the coming months.