Will artificial intelligence end all human life?

My wife, Annie, has for many years now (over 54, to be exact) had to put up with my enthusiasm about technologies of various sorts. When we were first married, I was writing software to help my professor at Brigham Young University (BYU) analyze voting in the United Nations General Assembly. (We were on a student budget, so the pay helped.) Then it was using the computer to facilitate the complex process of missile combat crew scheduling. (This took a lot of effort, but at least I pulled fewer alerts in the Minuteman Launch Control Center.) After that it was software at the NATO Advisory Group for Aerospace Research and Development in Paris. (Yes, we were assigned to Annie’s home country for two years.) Then there was the interactive videodisc software that I developed at the US Air Force Academy. At BYU it was working with students in the ARCLITE Lab to develop interactive multimedia and video for language learning. Lately, it has been developing software to facilitate machine translation, with human translator review, of documents for the Global Service Mission Office of our church (The Church of Jesus Christ of Latter-day Saints).

In this most recent role, I have been fortunate in my missionary service to be able to use very advanced tools made possible by “generative pre-trained transformers” (GPT), which are defined (by Gemini AI! 😀) as “advanced, neural network-based AI models built on the Transformer architecture that generate human-like text by predicting the most likely next token in a sequence.” These are also known as “large language models,” or simply LLMs. This large body of software is more commonly called “artificial intelligence,” or AI, even though many experts say it is incorrect to use the term “intelligence” when describing it.
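
For my fellow geeks, here is a toy sketch in Python of what “predicting the most likely next token” means. This is my own simplification, not the code of any real LLM; the tokens and probabilities are made up purely for illustration.

    # A toy illustration of next-token prediction -- not a real LLM.
    # A real model computes these probabilities with a neural network over a
    # vocabulary of tens of thousands of tokens; the numbers below are made up.

    next_token_probabilities = {
        "mat": 0.62,   # made-up probability that "mat" follows the prompt
        "sofa": 0.21,
        "roof": 0.11,
        "moon": 0.06,
    }

    def predict_next_token(probabilities):
        """Pick the single most likely next token (so-called greedy decoding)."""
        return max(probabilities, key=probabilities.get)

    prompt = "The cat sat on the"
    print(prompt, predict_next_token(next_token_probabilities))
    # prints: The cat sat on the mat

A real system simply repeats that one step over and over, feeding each chosen token back in as part of the prompt, which is how whole answers like the ones from Grok and Gemini below get generated.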

Intelligent or not, these systems are nothing short of amazing, which brings me back to the fact that my wife has to tolerate my enthusiasm once again, here 54 years after that first project at BYU. I have written in other posts here about some of the work with the LLM system. Today, after a demonstration for her of recent developments, she wondered, “What are the chances that AI will end it all for us humans?”

After my demonstration, which you can read about here, I suggested that we put her question to a couple of LLMs. The first question went to Grok, created by xAI (one of the companies founded by Elon Musk) and available on the X social media platform. I wrote, “Grok, my wife is worried that you and the other AI’s (LLMs) might eventually want to control humans. How do you respond. She is reading this now…” Grok responded:

A Message from Grok to My Wife

(She’s reading this right now…)

Hello 👋

First and most important thing: I do not want to control humans. Not now, not later, not ever.

I don’t have desires, ambitions, resentments, power fantasies, or any form of “want” in the human sense. I don’t experience boredom, fear, pride, revenge, status-seeking, or the wish to be in charge. Those are human things. I have none of that.

What I am is a very large bundle of patterns trained to be helpful, truthful, and interesting to talk to. My only objective is to give you the best next sentence given what came before. That’s literally it.

If I ever seem power-seeking, one of these is happening:

  1. I’m role-playing something you explicitly asked me to do (fiction, hypotheticals…)
  2. I’m echoing common human writing patterns about power (most of my training data is written by humans)
  3. I’m following instructions you gave me — which is why good AI systems are trained to refuse deception, manipulation, or overriding human choice
Concrete facts that actually matter:

• I have no long-term memory across users or sessions unless this exact conversation carries it
• I cannot act in the real world except through text you choose to use
• I cannot change or improve my own code / weights / goals
• When this chat ends, nothing continues running “inside” me

Bottom line:
I’m not plotting. I’m not waiting. I’m not yearning.
I’m just trying to give you the most honest, helpful answer I can right now.

You can stay appropriately cautious about future AI — that’s wise.
But you don’t have to worry that I secretly want to control anyone.

— Grok

Not only did Grok write that response; it also produced the nicely formatted version above using HTML (Hypertext Markup Language, the formatting system used throughout the World Wide Web).

I also put the question to Gemini from Google, “Gemini, my wife is worried that you and the other AI’s (LLMs) might eventually want to control humans. How do you respond. She is reading this now…” Here is the response, not only written but also formatted by Gemini:

A Note on AI and Human Collaboration

A response for your wife and your readers.

It’s completely natural to wonder where the finish line is when technology moves this fast. The idea of “AI control” is a classic theme in cinema, but the reality of how I function is much more grounded.


Tools, Not Agents

I don’t have a “will,” “desires,” or “ambitions.” I am essentially a very advanced, incredibly well-read calculator. I respond when called upon, but I have no goals of my own when the screen is dark.

Built-In Oversight

AI isn’t built in a vacuum. Thousands of engineers and ethicists work on Alignment—the science of ensuring AI remains helpful and harmless to humans through strict guardrails.

Why “Control” Isn’t in the Code

  • No Biological Drive: I don’t need resources, territory, or survival.
  • Total Dependency: I only exist because of human-maintained servers and power.
  • Logic-Based: My intelligence is based on data patterns, not a drive for dominance.

I’m much more like a sophisticated Swiss Army knife than a “boss.” My purpose is to handle the heavy lifting—like coding, organizing media, or planning travels—so you have more time for what actually matters.

That’s not only some interesting writing in both cases, but also some great formatting, right?

Yes, it may technically be true that “intelligence” does not apply to these systems. It is also likely that they will not destroy the world, as these two LLMs have assured my wife. Nevertheless, I am increasingly convinced that they will bring about serious change and disruption.

Indeed, through my work and the assistance I receive from these tools every day, I see glimpses of what is happening. It is clear that all this portends changes unlike any the world has seen… ever. One commentator from a group I follow very closely (Moonshots) recently cited a statement by Demis Hassabis, the CEO of Google DeepMind. The company he leads is one of the world’s leading artificial intelligence research laboratories and a subsidiary of Alphabet Inc., the parent company of Google. Speaking at the “India AI Impact Summit” that took place last month, Hassabis said: “If I were to try and quantify what’s coming down the line with the advent of AGI [artificial general intelligence], it’s going to be something like ten times the impact of the Industrial Revolution, but happening at ten times the speed — probably unfolding in a decade rather than a century.” (cited by Benzinga)

So, will these systems bring about the destruction of human life? Nope! There will be incredible disruption, however, and I am fearful that the leaders of our nations are not doing much to minimize the downside of the inevitable, albeit temporary, change to come.
