It is rare that a single magazine article I read online prompts a missive. By my own count, this is my eighth short article dealing with the topic of artificial intelligence (A.I.), generative A.I., and, more significantly, artificial general intelligence (A.G.I.). Indeed, I have written more about A.I. than almost any other single topic. In this vein, my attention was especially drawn to a recent article by Andrew Marantz appearing in The New Yorker under the title “Among the A.I. Doomsayers,” concerning the ongoing intellectual debate, primarily on the West Coast, between those who think machine intelligence will transform humanity for the better and those who fear A.I. may destroy us.
Why, you may ask, do I remain so interested in this topic?
After all: the US presidential election is only 236 days away. The Gaza war continues, prompting criticism from celebrities at the Oscars (and aren’t they the most important people to listen to?), street demonstrations, and a rising tide of antisemitism; Hezbollah in Lebanon launched a new volley of missiles at Israel, followed by an immediate Israeli response in the Bekaa Valley; the Houthis remain a threat to shipping in the Red Sea; Ukraine says more weapons are desperately needed to forestall a new Russian advance (with the war now entering its third year); Russian and Chinese naval forces will join Iranian naval units in a major maritime exercise in the Middle East; the FBI Director testified before Congress last week that the estimated 8-10 million illegal immigrants who have flooded across our southern border during the Biden administration likely include ISIS recruits and an untold number of young Chinese men of military age; China and Russia announce they will collaborate to build an unmanned nuclear station at the lunar South Pole; Elon Musk’s SpaceX successfully launches its heavy Starship on the third attempt; Congress is moving to ban TikTok over its ties to the Chinese Communist Party; domestic and international fallout continues in the wake of President Biden’s uncommonly feisty State of the Union address (did Biden actually apologize for using the term “illegal” to describe the suspect in Laken Riley’s murder, while failing to offer condolences to her family or even mention her by her correct name?); and the announcement that we will soon be paying more in interest on the national debt than the entire outlay for the Defense Department.
But we can always print more money, right?
And in the midst of all this, Jeemes, you want to write another piece about A.I.?
Why?
For many years, I assumed these changes would take much longer to gestate and materialize.
When I write and think about the future, and what it will mean for the spiritual destinies of my children and grandchildren, the most difficult futuristic piece for me to fit into the puzzle has been how to gauge the progress to be made by artificial intelligence. Specifically, will A.I. advance exponentially toward the so-called “singularity,” that point where computer-based intelligences become indistinguishable from human-based intelligence? I read an article this morning reporting that Ray Kurzweil, the futurist and former Google engineer who first brought the notion of the future “singularity” into popular techno-parlance, has moved up his prediction for its occurrence from 2045 to 2029. And that, my friends, is not too far away.
Or, on the other hand, will A.I. march forward in sporadic “fits and starts” of breakthroughs? If you had asked me that question three years ago, I would have said all the available evidence supported that trajectory. But that was before the ChatGPT revolution and today’s race to develop ever more powerful A.I. training models.
Even harder to believe (at least for me) is that ChatGPT is already yesterday’s news in an exponentially changing technology landscape. For example, technology watcher Will Knight reports on a start-up called Cognition AI that has released an A.I. program called “Devin,” the latest and most polished of an emerging class of A.I. “agents” which, instead of providing answers or advice about a problem presented by a human, can take action to solve it.
Whether you think A.I. is an irresistible force charging relentlessly forward or a technology that will advance in fits and spurts, the real question is what the world will look like for Christian believers (and my grandchildren) in 30-35 years. A case in point: my grandson Joshua will graduate from high school this year. He plans to study psychology at the same college (now a university) that I attended. If he wants to stay in that field, I am going to recommend that he specialize in an area that combines human psychology with working alongside automated systems. Right now, that seems like sound advice.
A perceptual tension between two A.I.-related worldviews dominates today’s debate, and it is the essence of Marantz’s fascinating article. On one side are the techno-optimists (they call themselves “effective accelerationists,” or e/accs), who essentially believe that A.I. will usher in a utopian future for all humanity, as long as the worriers get out of the way. On social media, they troll doomsayers as “decels,” or, even worse, “regulation-loving bureaucrats.”
Standing at the opposite extreme are the doomsayers, or the P(doom) camp, whose “timelines” are predictions of how soon A.I. will pass particular benchmarks, such as writing a Top Forty pop song or a bestselling novel, making a Nobel-worthy scientific breakthrough, or achieving artificial general intelligence (the point at which a machine can do any cognitive task that a person can do). P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everybody on the planet.
From our present perceptual vantage point, it looks like A.I.-enhanced technologies are destined to become the skeletal framework upon which the other advances—in biogenetics, communication technologies, the metaverse, quantum applications, etc.—will hang. (As I have written previously, all of this assumes the absence of a totally unpredictable, but game-changing, “Black Swan” event over the next decade or so. And we are long overdue.)
Perhaps this is a long-winded way of explaining why A.I.-related topics have preoccupied my thinking for decades.
What makes this quest especially unusual is that my three professions—college history professor, intelligence analyst, and lawyer—draw on very different spheres of thinking, and perceptual approaches, to arrive at conclusions on the topic.
Perhaps I should have put this apology right up front; it is the future, and what it holds for believers, that turns my intellectual wheels and triggers my creative juices. Sorry, it is the way I am wired …
But A.I. is so much more than a topic for futurists and technologists to discuss at Bay Area “scenes.”
As I was strolling through a Borders bookstore last weekend with my son-in-law and two grandkids, I noticed the recently published book 2054 by Elliot Ackerman and James Stavridis, concerning the role of A.I. in future conflicts. The same two men wrote one of my favorite books on the topic, 2034 (in which China neutralizes the US “eyes in the sky” advantage with a sneak attack and wins the opening bouts of a future war in the Pacific). At any rate, the authors co-wrote an essay in The Wall Street Journal in March 2024 asserting, among other things, that drones appear to be a manageable threat on today’s battlefield, but that in the future, when hundreds of them are harnessed to A.I. technology, they will become a tool of conquest. As they note in the piece: “the drone will change the face of warfare when employed in swarms directed by AI. This moment hasn’t yet arrived, but it is rushing to meet us. If we’re not prepared, these new technologies deployed at scale could shift the global balance of military power.”
How true. As a former military analyst in the intelligence community, I remember being invited to a military “game” scenario set 50 years in the future in the Taiwan Strait. It was an incredible experience. I learned firsthand how attached naval leaders are to their big-ticket platforms, such as aircraft carriers. (In the naval deployment following the Oct. 7 Hamas massacre and the Israeli response in Gaza, the US sent two carrier battle groups, one headed by the most expensive warship in history, the $13 billion USS Gerald R. Ford, on its maiden deployment. For that same cost a nation could purchase over 650,000 Iranian-made Shahed drones.)
The essay also discusses how A.I. pattern recognition is changing the “OODA loop” (observe, orient, decide, act) advanced in the 1950s by USAF fighter pilot John Boyd. In a conflict, the theory holds, the side that can move through its OODA loop fastest possesses a decisive battlefield advantage. Transformational warfare in the future will not be a race for the best platforms but rather for the best A.I. directing those platforms; in the authors’ words, “warfare is headed toward a brain-on-brain conflict” … “a war of OODA loops, swarm versus swarm.” At present the US insists that a human decision maker must always remain in the loop before any A.I.-based system can conduct a lethal strike. Will our adversaries show similar restraint?
I doubt it.
By the way, did I tell you that my grandson Joshua last week received a card to register for the Selective Service (draft)?
“Sigh.”
A.I. changes will affect my grandson’s future decisions, the political process (our first true A.I. presidential election replete with “deepfakes”), and the very nature of war.
Stay tuned.
Jeemes Akers is about to publish the second novel in his futuristic techno-Christian trilogy, Prawnocuos Resplendent. You can purchase this book, and the first in the trilogy, at http://jeemesakers.com. The new book continues the story of a group of Christian youths living some 30-35 years in the future who, along with newfound friends, confront an increasingly techno-paganist world as they frantically race around the globe in a bid to halt the next pandemic and fend off powerful global technological megacorporations.