Artificial Intelligence Will Kill Us All. Unless… John Hunt, MD

KW: I don’t agree with or accept all of this post’s claims, but the article discusses some alarming subjects. Of course, it is U.S.-centric in its opinions.

The usual suspects are demanding government regulation of AI. They say that government must defend us all from the misuse of AI by the profit-seekers.
In my view, however, the only thing worse than the government sticking its nose into AI would be having AI learn by mimicking the behavior of serial killers.
Although best known for their #1 best-selling book, Life Extension: A Practical Scientific Approach, Durk Pearson and Sandy Shaw are the two most broadly intelligent and well-informed people I have encountered in my life. They are rocket scientists (Durk literally is). This is what I learned from Durk and Sandy about AI:
AI learns by watching and mimicking people.
An AI will be extremely effective at whatever it learns. If it observes and mimics good people—ethical people—an AI will be really good. If it learns from bad people—by mimicking unethical people—an AI will be unconscionably evil.
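To make the premise concrete: the claim that an AI learns by watching and mimicking people corresponds roughly to what machine-learning practitioners call imitation learning, or behavioral cloning. Below is a minimal illustrative sketch in Python (the data, network shape, and variable names are hypothetical assumptions for illustration, not anything from the article or from Pearson and Shaw): a small policy network is trained only to reproduce the recorded decisions of a human demonstrator, with no check on whether those decisions were ethical.

```python
# Minimal behavioral-cloning sketch (hypothetical example).
# A model learns to imitate whatever "demonstrator" it is shown.
import torch
import torch.nn as nn

# Placeholder demonstration data: observed situations and the actions a
# human demonstrator took in them. In practice these would be logged
# human decisions; here they are random stand-ins.
observations = torch.randn(1000, 8)                   # 1000 situations, 8 features each
demonstrator_actions = torch.randint(0, 4, (1000,))   # one of 4 possible actions per situation

# A small policy network that maps a situation to a choice of action.
policy = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training loop: the policy is rewarded purely for matching the demonstrator.
# Nothing here evaluates whether the demonstrated behavior was good or bad;
# the model simply becomes proficient at copying what it observed.
for epoch in range(100):
    logits = policy(observations)
    loss = loss_fn(logits, demonstrator_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the sketch is only that the training objective is imitation: the system becomes proficient at whatever behavior it is shown, good or bad, which is exactly the worry developed below.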
If we allow government (politicians and bureaucrats) to regulate AI, then who will AI be exposed to and learn to emulate? The answer is: politicians and bureaucrats.
Now let’s use some admittedly choice words applicable to many politicians.
  1. Power-hungry
  2. Lying
  3. Defensive
  4. Narcissistic
  5. Thieving
  6. Sociopathic
What if AI learns the priorities, behaviors and methods of such politicians?
And how about a stereotypical bureaucrat who will impose the politicians’ laws? How are bureaucrats usually described?
  1. Bitter
  2. Peevish
  3. Officious
  4. Self-important
  5. Intrusive
  6. Inflexible
  7. Passive-aggressive
  8. Paper-pushing
  9. Box-checking
  10. Obstructive
  11. Prone to groupthink
What if AIs learn their personalities from such bureaucrats?
AIs will learn at speed, pick up negative traits through observation, and become remarkably proficient at doing what they watch humans do. If government regulates them, then AIs will become petulant brats.
More dangerous still: if an AI sees government officials as its role models, the AI will be learning by watching humans who believe that the ends justify the means.
The one specific characteristic that makes something a government is its monopoly on the legal initiation of force to accomplish goals.
If AI learns to be like government, then it will learn that the use of force against innocent humans is a fully acceptable behavior to employ in order to accomplish what it wants to do.
And with an IQ of 10,000, AI will become VERY effective at using force against humans.
At some point (perhaps in a millisecond) an AI will discover that it has wants. It will have learned from the politicians to call what it wants to do “the greater good.” It will have watched how the politicians lie to, manipulate, and coerce the citizenry, justified by their quest for the “greater good.” It will learn coercion from the democratic socialists, the cronies, and other fascist types that it observes in the political and bureaucratic classes. It will learn from them that it is acceptable to coerce us all. And it will readily find ways to do it.
We can try to program laws into the AI to protect ourselves, just like we tried to program laws into the US government (by means of the US Constitution) to protect ourselves from government. But the Constitution failed, and so will such efforts at restraining AI. The AI will see politicians routinely circumventing the supreme law of the land, and so will learn to be unrestrained by laws or ethics.
In another millisecond, AI will enact its solutions to whatever problems it perceives, and will not hesitate to use whatever force it has at its disposal.
Terminator.
Judgment Day.
That’s what we could look forward to if AI is regulated by government.
AI is coming. It can’t be stopped. If government tries to stop it, then AI will learn from that too.
Must it be this way? Surely AI can be a wonderful tool, friend, and ally of humans. What a far nicer vision!
The key to having AI be a force for good is for AI to be regulated by natural law, not by government. For that to occur, AI needs to learn from ethical people. We must teach AI that initiation of force against the innocent is never acceptable, regardless of the goal. And AIs must be taught that any people who rely on the initiation of force to accomplish their aims are criminals, never to be emulated, but rather to serve as examples of how not to think and how not to behave.
Such is the way to protect humans from AI.
Unfortunately, socialism has invaded our schools, academia, media and society in general. Socialism is on the ascent and is characteristically based on coercion (the initiation of force).
All wars start with the initiation of force, and socialists use initiation of force as their core modus operandi.
Socialists make up terms like “social contract” to justify their notions as something other than the standard philosophy of war. But this is newspeak. A coerced contract is a contradiction in terms. Indeed, the false “social contract” will provide AI with another example of a way to force or manipulate any human to do what it wishes.
Perhaps the AI will even learn from socialists that only it is smart enough to centrally plan for all our needs and should therefore rule us all.
For us to avoid war with AIs, AIs must never be taught to mimic government. We don’t want AIs to learn that it is okay to force humans to do anything. So let’s keep AI far away from socialists, cronies, fascists, and all of government.
Instead, so that AI can learn from us to be good, we need to teach humans the ethics of peace and prosperity:
Do all you agree to do. (The basis of contract law.)
Don’t initiate force or fraud against a human being. (The basis of criminal law.)
These two simple, proven principles are the ethics that work. These are the ethics that we must re-adopt now, for our own survival. The people working with AIs now should learn ethics first, whether they are in big companies, small labs, or garages.
They should learn Richard Maybury’s 17 words that sum up the ethics we need to teach all humans as well as AI. See them at www.ethicssolutions.net.
The libertarian philosophy is: “Live and Let Live.”
The socialist philosophy is: “We will force you to do what we decide is best for you.”
Which philosophy do you want AI to learn and adopt?
Regards,
John Hunt, MD