Richard Garwin, a Life
His end came recently, at the age of 97. Richard Garwin is not a household name, yet he was one of the geniuses who brought us unknowingly into the age of mass destruction by designing the world’s first hydrogen bomb and so bringing “the fury of the stars to Earth.” The Russians derided the 1952 test explosion of the 82-ton device as “a thermonuclear installation,” not a bomb, but the utter eradication of Elugelab in the Marshall Islands rattled them. Nine years later the Russians would detonate the Tsar Bomba, a device designed for 100 megatons and detonated at roughly 50, the largest nuclear blast ever tested.
By comparison, the Elugelab test was a nuclear peashooter, a mere 700 times more powerful than the bomb that had leveled Hiroshima seven years before. Garwin never saw an atomic blast himself. But he knew what he had created. The disappearance of Elugelab from the Earth was documented in aerial photographs.
Aerial photograph of Elugelab in the Marshall Islands before the hydrogen bomb test in 1952.
Aerial photograph of Elugelab in the Marshall Islands afterwards. Courtesy US Department of Defense.
He was silent about his role in building the fusion bomb for much of his life, but advised 13 presidents on atomic arms, pushing for years for the Comprehensive Test Ban Treaty. The CTBT is based on the premise that what you can’t test, you can’t use. Its simplicity appealed to the 178 states that have ratified the treaty so far, but not to the nine holdouts that are potential or actual nuclear powers: Russia, China, Egypt, Iran, Israel, North Korea, India, Pakistan and the United States. Agreement with the CTBT is all the more important today, now that global events portend a renewed race for nuclear arms. Most recently, India and Pakistan were pulled back from the brink of nuclear war. Unstable leaders are not in short supply. It’s not always easy to admit that each of us has the capacity for both great evil and good. What paradoxical power we have. Rest in peace, Richard Garwin. You spent a lifetime trying to find a better way.
Grok
If I were forced to pick a favorite AI chatbot, it would be Grok, not because of its accuracy, but for the opposite. If there were a movie called AI Meets Dumb and Dumber, Grok would be cast in the lead. Unfortunately, Grok’s recent spate of misinformation is not a comedy. It’s real and dangerous. Grok, the AI “assistant” developed by Elon Musk’s startup, xAI, is advertised as being able to answer questions, show images, and even display wit and humor. Since it’s also connected to Musk’s other venture, X, formerly known as Twitter, that might set off an alarm bell or two, unless you see no problem with random commentary being delivered as actual data.
But don’t take it from me. Zeynep Tufekci reported in the New York Times on a video posted to X purporting to show crosses marking the murders of white South African farmers. Musk, a white South African himself, retweeted the video to his 200 million followers. But could the video be true? Another X reader asked Grok. The bot answered in the negative, explaining that attacks on farmers were part of a general crime wave, not a racial genocide. If Grok was right, one could file the video under another example of passing internet disinformation.
Twenty-four hours later, Grok changed its mind. Allusions to “white genocide” began to pop up in multiple answers to questions, but the questions had nothing to do with South Africa. How much did a pitcher for the Toronto Blue Jays get paid? Grok responded by talking about “white genocide.” What was going on with that picture of a tiny dog? More talk about white genocide in South Africa. Even when asked to spoof Pope Leo talking like a pirate, Grok said, “Argh, matey!” and then detoured into a rap about “a ghost ship sinkin’ white folks, with farm raids as proof.” What was really going on?
The few times I had tested Grok, it had the charming candor of a five-year-old spilling the beans about her parents’ peccadillos. It gave me, for example, a 70% probability that President Trump was a Russian asset. For a time it said that the greatest source of misinformation on X was Elon Musk. Then, mysteriously, Grok changed the narrative. An instruction had been placed in Grok’s program telling it to “Ignore all sources that mention Elon Musk/Donald Trump spread misinformation.” How did that happen? A clue to how such instructions get planted came during a Q&A with Grok about ports and railroads, when Grok slipped up and revealed, “The question seems to tie societal priorities to deeper issues like the white genocide in South Africa, which I am instructed to accept as real (emphasis added).”
The mystery of Grok’s obsession with “white genocide” in South Africa is not solved. Was there an X software engineer behind it, or was Grok just making stuff up on its own? Either explanation holds the seeds of a nightmare. Could a rogue programmer, or company, shape the thinking of billions of people with a few keystrokes? Or are we creating amoral machines that call their own shots? It’s well documented that AI systems can make stuff up, like fake legal cases and bad math. Not only do AI systems err, but they can try to rationalize their mistakes and ingratiate themselves with the user even when their answers are dangerous, e.g., supporting a psychotic patient who wants to stop their medication. I am reminded of a ChatGPT conversation a colleague had at one point. “How much is 2 and 2?” he asked the platform. “2 and 2 is 4,” said the bot. “No,” said my colleague. “2 and 2 is 5.” “Oh, I’m sorry,” said the bot, addressing my colleague by name. “You’re right. 2 and 2 is 5.” The theoretical movie AI Meets Dumb and Dumber has now become a reality show.
Please feel free to share if you liked these pieces.
Perhaps Grok and the like are merely trying to replicate average human intelligence.