The Gold Report

ChatGPT Doesn’t Know Right From Wrong. Do You?

Sam Altman has a formula for ChatGPT's moral compass. It's not good.

Dr. Simone Gold
Nov 20, 2025

In a recent and eye-opening interview, Tucker Carlson asked OpenAI CEO Sam Altman what moral framework the company uses for ChatGPT. "What is right or wrong according to ChatGPT?" Carlson asked.

Altman replied: “What I think ChatGPT should do is reflect that weighted average or whatever of humanity’s moral view which will evolve over time. And we are here to serve our users. We’re here to serve people. This is a technological tool for people.” He added that his role was “to make sure that we are accurately reflecting the preferences of humanity—or for now, of our user base—and eventually of humanity.”
