ChatGPT Doesn’t Know Right From Wrong. Do You?
Sam Altman has a formula for ChatGPT's moral compass. It's not good.
In a recent and eye-opening interview, Tucker Carlson asked OpenAI CEO Sam Altman what moral framework the company uses for ChatGPT. “What is right or wrong according to ChatGPT?” Carlson asked.
Altman replied: “What I think ChatGPT should do is reflect that weighted average or whatever of humanity’s moral view which will evolve over time. And we are here to serve our users. We’re here to serve people. This is a technological tool for people.” He added that his role was “to make sure that we are accurately reflecting the preferences of humanity—or for now, of our user base—and eventually of humanity.”