It's Debatable: Should the federal government regulate Artificial Intelligence?

In this week's It's Debatable column, Rick Rosen and Charles Moster explore whether the federal government should play a role in regulating AI. Rosen retired as a professor from the Texas Tech University School of Law and is a retired U.S. Army colonel. Moster is founder of the Moster Law Firm based in Lubbock with seven offices including Austin, Dallas, and Houston. 

Rosen - 1

I told Charles when we discussed this topic that I am a “dinosaur” and know little about AI. I was, however, a fan of HAL from the 1968 classic 2001: A Space Odyssey. As advanced as HAL was, it malfunctioned, and as it turned out from the 1982 sequel, it failed due to a human programming error.

I assume the same is true of AI today: it can be of great benefit, but because it is developed by humans it can also produce undesirable results. Indeed, in October 2023, President Joe Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which stated that the irresponsible use of AI “could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”


To assist me in determining whether the federal government should regulate AI, I turned to ChatGPT, the “free-to-use” AI system. I received a very “lawyer-like” answer that told me very little: “[T]he question of whether government regulation of AI is a good idea is nuanced and requires careful consideration of various factors….”

I do not favor government regulation of viewpoints expressed by AI, understanding that they reflect programmer biases. The marketplace can police unwelcome ideas, as demonstrated by Google’s Gemini AI program, which seemingly eliminated white people from human history, costing the company $70 billion in market value. Nor should the United States adopt the Chinese model of regulation, which uses AI to track citizens, control information, and shape public opinion.

But there are areas that should be subject to federal regulation rather than left to a patchwork of state laws. For example, the federal government should ensure that AI is not used to violate equal protection and civil rights laws governing employment, college admissions, and access to public benefits. It must protect the most vulnerable citizens from arbitrary decisions, such as UnitedHealthcare’s alleged use of AI to override physician recommendations and deny elderly patients care owed through their Medicare Advantage plans. The government must ensure the legal accountability of those who develop and use AI to reveal private information, defame individuals, steal intellectual property, or employ deceptive trade practices.

The government should not stifle AI innovation; however, as AI expert Luca Sambucci put it: “We should not expect technology companies to have humanity’s best interest at heart.”

Moster - 1

Curtailing the inevitable rise of artificial intelligence is foolhardy and dangerous. Whether we like it or not, AI is rapidly acquiring the capacity to exceed human intelligence, which will massively transform every aspect of our lives, our economy, and our technology. Critically, if we prematurely curb the growth of AI, our adversaries surely will not. The consequences could be frightening and lead to the end of our very way of life.


On Aug. 2, 1939, Albert Einstein wrote his now-famous letter to Franklin Roosevelt warning that if we did not move forward on developing an atomic bomb, Hitler and his Nazis would beat us to the “atomic punch.” At that time, the very notion that an atomic explosion was even possible was the subject of wild speculation, relegated to science fiction. Fortunately, FDR heeded the call to action and authorized the Manhattan Project, made famous in the hit movie Oppenheimer. As a result, the first atomic bomb lit up the early morning skies of the Jornada del Muerto desert in July 1945, leading Oppenheimer to prophetically exclaim, “Now I am become death.”

That was certainly true, as the inhabitants of Hiroshima and Nagasaki would attest just a month later. The atomic bombing of Japan vaporized tens of thousands of civilians and soldiers alike. Although the ethical implications are debated to this day, it ended World War II.

But think for a moment: what if Einstein had never sent that letter to FDR or, worse, the U.S. had imposed restrictions on the development of nuclear technology of the kind Rick and I are now debating in this column? The Nazis and Japanese were well on their way to creating their own atomic weaponry and would have hit American cities in a heartbeat. Our very way of life could have ended in a mushroom cloud circa 1945. Is there any question that Hitler would have dropped the bomb if he had it first?

As a technologist, computer programmer, and attorney focused on AI, I will echo Einstein’s clarion call to FDR; indeed, I wrote and disseminated such a letter to then-President Trump in my book, How to Build an Enhanced Computer and Take Over the World (Amazon Books, 2019). This missive warned Mr. Trump that the Russians and Chinese will beat us to the “AI punch” and destroy our way of life if we fail to fund a Manhattan-like project and accelerate the development of a computer system that surpasses human intelligence. In particular, I referenced the rise of biocomputers, first demonstrated by American technologist Bill Ditto in 1999, which combined the functions of laboratory-grown neurons (brain cells) with silicon chips to perform cognitive functions. Currently, entrepreneurial ventures mostly located in California, Russia, China, and possibly North Korea are growing animal neurons in aggregates sufficient to perform military operations. A California-based company has already rolled out a bomb-sniffing drone consisting of approximately 40 neurons, which was purchased and deployed by the U.S. Department of Defense. This is not science fiction. This is real.

It is estimated that the ability to harvest and grow one million neurons will light up the “cognitive skies” akin to the mushroom cloud over the Jornada del Muerto. With the same potency as the discovery of fire and the atomic bomb, a biocomputer made conscious by the aggregation of neurons and electronics will invariably change who we are.

The quandary for all of us is what comes next. If a million neurons could replicate the intellectual power of an MIT postdoctoral student, what about a billion neurons, or a quintillion? As I have written extensively in my books and online lectures, gargantuan cognitive power will suddenly become available to its creators, allowing a quantum leap in technology and science by a massive multiplier; light-years would be a better descriptor. New and unimaginable military capabilities would suddenly become available to the proprietors of this biotechnology and be instantly applied to their advantage.

Imagine the ability to vaporize an entire country in a microsecond and at the speed of light. How about the application of an electronic force field which surrounds the invading country making it impervious to attack?

Most physicists will tell you that “time travel” is possible but centuries beyond our technology. Let’s say it suddenly wasn’t, and Putin used it as a tool to go back to the year 1753, when a 21-year-old, inexperienced soldier named George Washington first served in the French and Indian War. A single shot would likely lead to the stillbirth of the American Revolution, and our country would cease to exist. If that timeline didn’t do the trick, Putin could knock off the other Founding Fathers in their infancy, and no one would know the difference. If this sounds fanciful and outrageous, look back on Einstein’s warning to FDR and remember that in 1939 the unleashing of an atomic bomb was only the subject of H.G. Wells’ science fiction. So is “time travel” circa 2024. Have you read “The Time Machine” by H.G. Wells?

Apart from the obvious military applications, AI will invariably change the way we work, live, and survive as a species. Like it or not, by 2035 AI will displace millions of American workers, mostly in entry-level jobs such as customer service, banking, retail, and office operations. During the same timeframe, industrial production (automotive, etc.) and trucking will be performed by intelligent, autonomous computer systems. For the record, the technology to perform these functions already exists.

On the flip side, AI will lead to the ultimate cure of diseases such as cancer and heart disease, which have devastated all of us. Aging itself will finally be considered a disease, not an inevitable biological terminus. For those so inclined, immortality will be achievable.

And that’s just the beginning.

The risks of AI are palpable but cannot be avoided or mitigated. We cannot allow our enemies with no moral constraints to win the AI battle. If so, I am absolutely convinced that we will cease to exist as a society.

The enormous benefits in science and medical technology will finally allow us to create a true heaven on Earth. Of course, the opposite is quite possible, that is, if AI is stillborn.

Rosen - 2

I do not dispute Charles’ eloquent defense of AI. He is a brilliant visionary and has forgotten more about AI than I will ever know. I do not believe the federal government should establish roadblocks to AI’s development and use; however, AI operates on commands and data through algorithms created by human beings with all their foibles and prejudices. In some cases, these algorithms generate undesirable or unlawful outcomes.

Take AI’s use in determining the allocation of government benefits, employment decisions (e.g., hiring, promotion), and college and university admissions. The White House recognized algorithmic discrimination in its AI Bill of Rights, noting that such discrimination may occur when “automated systems contribute to unjustified different treatment or impacts disfavoring people based on … classification[s] protected by law.”

For example, federal statute prohibits discrimination based on race, color, national origin, and sex in employment and in any education program receiving federal financial assistance. Equal protection, embodied in the Constitution’s Fifth and Fourteenth Amendments, forbids such discrimination by federal and state governments. Relying on AI to make employment and education decisions that are in fact discriminatory does not shield those decisions from government regulation and oversight.

The problem is, as Professor William Jacobson and Kemberlee Kaye of Cornell’s Equal Protection Project recently explained, that unlawful bias “operates in the shadows…. Algorithms can be … used to elevate certain groups over others.” And “because algorithms operate out of sight and undercover,” discrimination is difficult to prove. A 2023 study published in Humanities and Social Sciences Communications verifies that “algorithmic bias results in discriminatory hiring practices based on gender, race, color, and personality traits.” Government must have the regulatory means to discern and prevent such discrimination, including the ability to scrutinize the algorithms underlying AI’s decision-making process.

Charles focuses on how AI will benefit our nation and people. I fully agree with all his comments. Charles’ discussion reminds me of an expression I picked up in the Army: “Big Hands, Small Map.” It is a broad overview of AI, and as a general principle, the federal government should not impede AI’s development. When orchestrating the actual implementation of AI, however, there is a need for federal and state intervention to ensure that AI operates fairly and lawfully.

Moster - 2

I appreciate Rick’s more moderate position on regulation of AI but reaffirm that any federal involvement will invariably go bad. My experience and conclusions date back to Reagan’s full second term, during which I served as an attorney in his administration and pretty much single-handedly spent several billions of the people’s money. This encounter taught me that federal ineptitude strikes both sides of the aisle, notwithstanding Republicans’ revisionist fond recollections of the good old days of Reagan fiscal conservatism. It never happened.

The feds are clueless and will never understand the complexities and nuances of AI. To give them any oversight would be a debacle.

The ultimate risk, and the potential need for regulation, stems from what I believe is the inevitable consequence of AI gaining supra-human intelligence. The late, great physicist Stephen Hawking framed the issue succinctly when a reporter asked, “What would happen if machines become better at designing themselves than humans are?” Hawking’s answer was frightening: “If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds snails.”

For the record, we don’t treat snails, bugs, or even intelligent mammals such as apes, dolphins, and whales with any special deference or care. I squashed a few mosquitoes yesterday. My read on the true risks of AI is that supra-conscious machines with a “bad attitude” or “biased programming” will allow the acquiring nation to gain world control or, worse (if that’s possible), make all of us subservient to the machines themselves. That said, controlling AI is not the solution, as that would abdicate our ability to win the “AI war” and allow our adversaries to be victorious.

Like it or not, conscious computers based on laboratory-grown neurons will come online in the next decade, and whoever harnesses this technology first will become the dominant world power. Rather than considering regulation to achieve amorphous goals, we need to promote the AI equivalent of the Manhattan Project and the Space Race, acquiring this technology before the Axis of Evil (Russia, China, North Korea, and Iran) does. This technology would give them immediate access to military applications light-years beyond our capabilities. Recalling Hawking: our military forces would have the prowess and effectiveness of “snails.”

An equally malevolent risk would be a biocomputer coming online with an exceedingly nasty temperament and a hatred of all humans regardless of ideology. This is quite possible, as current biosystems utilize rodent brains as the source of neuronal tissue. Ever corner a rat? They are relentless and not prone to negotiation. Imagine a rodent-based biocomputer utilizing a quintillion (1 followed by 18 zeros) neurons. By way of comparison, the human brain has 86 billion neurons. Get my point?

A malevolent and vastly intelligent system with an inherent biological “nasty temperament” would either control the human race or eliminate us in a heartbeat. That is what is likely coming, and soon, to an enemy laboratory.

The only way out is not to limit AI development but to accelerate it. We need to launch a government-funded crash program to develop AI now. Notwithstanding my disdain for government, the Manhattan Project did work, and that model could be applied to an American effort to gain first control of supra-human AI.

The point is that whoever first acquires this technology will become the dominant and only world power. Only one party can win, and it has to be the United States and our democratic allies. If our enemies reach the finish line first, the result will be catastrophic.

And thus I predict that there will soon be a battle, akin to the Space Race, to develop the first conscious AI. This is not the time to create government handicaps to AI development. We need to accelerate our efforts.

We must heed Einstein’s advice to FDR and reapply it to the technological risks of the 21st Century.

This article originally appeared on Lubbock Avalanche-Journal: It's Debatable: Should federal government regulate Artificial Intelligence?