Sam Altman: ChatGPT Chats Aren’t Legally Private – OpenAI Can Be Forced to Disclose Them

by Jason Scott


Disclaimer: This article is for informational purposes only and does not constitute financial advice. BitPinas has no commercial relationship with any mentioned entity unless otherwise stated.


ChatGPT users may want to reconsider relying on the AI tool for emotional support, health advice, or even mental health therapy.

That was the implication of remarks by Sam Altman, CEO of ChatGPT developer OpenAI, during his appearance on episode 599 of the podcast “This Past Weekend with Theo Von.”


Sam Altman | This Past Weekend w/ Theo Von #599

“People use ChatGPT, especially young people, as a therapist or a life coach. And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there is a legal privilege for it. There’s doctor-patient confidentiality, there is legal confidentiality, whatever. And we have not figured that out yet for when you talk to ChatGPT.”

Sam Altman, Chief Executive Officer, OpenAI

Because of this, Altman acknowledged that OpenAI could be legally required to produce sensitive information and documents shared with the AI tool. If a ChatGPT user discloses a sensitive matter or condition in a conversation and that conversation later becomes relevant to a lawsuit, OpenAI could be compelled to turn it over.

“I think that’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever—and no one had to think about that even a year ago.”

Sam Altman, Chief Executive Officer, OpenAI


The OpenAI chief thus revisited the point that AI tools still lack a legal or policy framework, arguing that conversations with AI should carry the same privacy protections as those with therapists or doctors.

At the same time, he said that as more people use AI, governments will want to step in to ensure these tools are not used for terrorism and other illicit activities.

“I am worried that the more AI in the world we have, the more surveillance the world is going to want. History is that the government takes that way too far, and I’m really nervous about that.”

Sam Altman, Chief Executive Officer, OpenAI

Recent ChatGPT News

Earlier this year, OpenAI introduced “Tasks,” a ChatGPT feature that lets users schedule the AI tool to carry out actions for them at a future time.

The ChatGPT developer is also rumored to be building a web browser to compete with Alphabet’s Google Chrome, which would feature a native chat interface and allow for the integration of AI agents.

In the Philippines, OpenAI closed ChatGPT accounts that had been used to generate coordinated political content promoting President Ferdinand Marcos Jr. and criticizing Vice President Sara Duterte.

This article is published on BitPinas: Sam Altman: ChatGPT Conversations Aren’t Legally Protected

