The head of trust and safety at privately held technology firm OpenAI has quit.
Dave Willner, who had led the artificial intelligence (A.I.) company's trust and safety team since February 2022, announced on social media that he is leaving OpenAI.
Willner’s departure comes at a critical time for OpenAI.
Since the company’s A.I. chatbot ChatGPT launched last year and became a sensation, OpenAI has faced growing scrutiny from lawmakers, regulators, and the public over the safety of its technology and its potential impact on society.
OpenAI CEO Sam Altman has publicly called for A.I. to be regulated by governments around the world, saying that chatbots such as ChatGPT can be used to manipulate voters and spread disinformation.
Willner, who previously worked at Meta Platforms (META) and Airbnb (ABNB), said on social media that “OpenAI is going through a high-intensity phase in its development.”
A statement from OpenAI about Willner stepping down said that “his work has been foundational in operationalizing our commitment to the safe and responsible use of our technology.”
OpenAI said that its current Chief Technology Officer, Mira Murati, will become the new head of trust and safety on an interim basis and that Willner will advise the team through the end of 2023.
Willner’s exit comes as OpenAI continues to work with regulators to develop protections around advanced artificial intelligence.
OpenAI has made voluntary commitments meant to make A.I. systems and products safer and more trustworthy.
As part of its pledge, OpenAI has agreed to put new A.I. systems through outside testing before they are released to the public.
OpenAI is a private company, and its stock does not trade on a public stock exchange. However, Microsoft (MSFT) has invested more than $10 billion in OpenAI and is integrating OpenAI's generative A.I. technology into its Bing search engine and other products.