I Asked ChatGPT If It Was A Threat To Humans - Here’s What It Said

By George Moen | Co-Founder–Publisher | WBN News Global

“Are you a threat to humans?”

That was my question.

Not in jest. Not as a gimmick. But as a real, pressing inquiry posed to one of the most advanced conversational AIs available to the public: ChatGPT.

And the answer? Short version: “No. But…”

The “but” is what we need to talk about.


What ChatGPT Says About Itself

When asked directly if it poses a threat to humanity, ChatGPT doesn’t flinch:

“No, I am not a threat to humans. I’m a language model, not a conscious being. I don’t have goals, emotions, or awareness.”

It goes on to clarify that it can’t act independently, doesn’t have access to the internet or systems, and is built with strict safeguards.

On the surface, this sounds reassuring—almost sterile. Like an obedient calculator with better vocabulary. But AI isn’t just a tool sitting idle. It’s a mirror. A magnifier. A multiplier of human intent. And that’s where the real concern starts.


The Hinton Effect

To get deeper insight, I looked at the man widely called the "Godfather of AI": Dr. Geoffrey Hinton. His legacy includes the breakthroughs that made today's AI—including ChatGPT—possible. He helped popularize the backpropagation algorithm, co-invented neural architectures such as the Boltzmann machine, and mentored future AI leaders like Ilya Sutskever (OpenAI) and Yann LeCun (Meta).

But in 2023, Hinton walked away from his role at Google.

Not because he was silenced, but because he no longer wanted to self-censor.

“I want to talk about AI safety. But I can’t do that while still at Google.”

That alone speaks volumes.


What Hinton Is Worried About

Hinton’s exit marked a turning point in public AI discourse. Here’s a breakdown of his concerns—many of which ChatGPT itself echoes:

1. Misinformation and Deepfakes

AI-generated content is already flooding the internet. From political propaganda to fake videos of celebrities saying things they never did, the information war is heating up—and AI is arming both sides.

2. Massive Job Displacement

AI is already automating work at scale. Writers, customer service agents, paralegals—no one is safe from the coming shift. For low- and mid-skill workers, this could be catastrophic.

3. Loss of Human Control

Large AI models are complex, often referred to as “black boxes.” Even their creators struggle to understand exactly how they reach conclusions. What happens when systems become too advanced to control?

4. Corporate Arms Race

Big Tech is in a gold rush to dominate AI. Google, Microsoft, OpenAI, Meta—they’re all shipping products fast. But few are pressing the brakes for safety.

5. Existential Risk

Hinton fears a future where AI becomes more intelligent than us—and no longer aligned with human goals. It’s not Skynet, but it’s not science fiction either. Thought leaders across the spectrum—from Elon Musk to the Future of Life Institute—share this concern.


So… Is ChatGPT Dangerous?

Not inherently.

But as ChatGPT candidly states, “The danger lies in how humans choose to use AI.”

Think of it like nuclear energy. It can power a city—or level one.

The same applies here. This tool can educate, innovate, and accelerate progress. But it can also be used to deceive, exploit, and disrupt entire industries. The real risk isn’t in the AI itself—but in the absence of oversight, ethics, and responsible usage.


Final Thoughts: The Mirror and the Matchstick

When I asked ChatGPT if it was a threat, it gave me a calm, factual answer.

But what shook me was the part it didn’t say directly:

We’ve already entered the age where AI magnifies whatever we feed it.
The question is no longer “What can AI do?”—but “What will we let it become?”

That’s what makes Geoffrey Hinton’s exit and warnings so important.

AI is not the villain. But it could enable villainy on a scale we’re not prepared for—unless we act now.

We need better laws. We need a culture of responsibility. And most of all, we need public awareness. That starts with asking hard questions… and listening carefully to the answers.


FACT CHECK:

  • Geoffrey Hinton resigned from Google in May 2023.
  • He publicly expressed concerns about AI risks, including misinformation, job loss, and the potential loss of control.
  • ChatGPT is a large language model created by OpenAI, not conscious or autonomous.
  • Many top AI researchers agree on the need for stronger oversight and global regulation.

George Moen 📧 Contact: gmoen@wbnn.news

TAGS: #AI #AIThreats #GeoffreyHinton #AIRegulation #ChatGPT #EthicalAI #FutureOfTech #WBNNewsGlobal
