
It is not too early to start thinking about AI Ethics.
Ethics is a Technology Tree, and ends up DOMINATING the end game. Most new players try to speed up their build by not investing in it because they think this makes them quicker in the early to mid game, but it crushes them as they find themselves soft-locked once it’s time to stop grinding and reach the win condition. — Jay Wells
We had better get these things right. The first step is to realize that AI Ethics must be different from Human Ethics.
Laws, whether Asimov’s laws or laws in our judicial system, are (at their best) collections of context-free Models.
In court, someone has to examine the evidence and determine, by a process of “Epistemic Reduction”, whether the Model (Law) applies. Epistemic Reduction is the process of discarding everything irrelevant about a situation, such as a crime, in order to arrive at the core issue – that which matters.
In our courts, this is done by the jury. It is the hardest job in the judicial system, which is why juries are single-use components to be replaced after each case… whereas lawyers, judges, and bailiffs “just carry out instructions” and therefore have less skin in the game. Judges, lawyers, and bailiffs can go on to the next case without remorse or doubt about the outcome of each case, since the most important decisions (about proof, guilt, and applicability of the law) were made by the jury.
In our machines, questions of ethics and “laws of robotics” need a component like that. Something that can determine whether the law applies or not. But this is exactly what Understanding is — the ability to go from low level sensory information to higher levels that are closer to Models, by discarding irrelevant context as you progress upward.
So the only kind of machine that has a chance of having useful ethics, or even a useful version of Asimov’s laws, is an Understanding Machine: an LLM, or probable future AIs. Such ethics cannot be built into an expert system or anything else we called “AI” in the 20th century, because none of those systems were capable of Epistemic Reduction. Of understanding the situation.
In reality, “To Err is a prerequisite for Understanding”.
We learn from our mistakes.
At the neuron level, and in LLMs,
we only learn from our mistakes.
If an LLM can reliably predict the next word or token, then there is nothing to learn at that moment. It only learns something when its guess is incorrect.
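This point can be made concrete with a small sketch, assuming a PyTorch-style setup (the vocabulary size, logits, and target token below are all made up for illustration): the cross-entropy loss on the next token, and therefore the gradient that drives any weight update, is close to zero when the model already predicts correctly, and large only when the guess is wrong.

```python
# Toy illustration: the learning signal only appears when the guess is wrong.
# Assumes PyTorch; the vocabulary size and numbers are placeholders.
import torch
import torch.nn.functional as F

vocab_size = 8
target = torch.tensor([3])  # index of the "correct" next token

def loss_and_gradient(logits):
    """Return the loss and the size of the gradient it would send back."""
    logits = logits.clone().requires_grad_(True)
    loss = F.cross_entropy(logits, target)
    loss.backward()
    return loss.item(), logits.grad.abs().sum().item()

# Confident, correct prediction: loss ~ 0, gradient ~ 0 -> nothing to learn.
correct = torch.full((1, vocab_size), -10.0)
correct[0, 3] = 10.0
print(loss_and_gradient(correct))

# Confident, wrong prediction: large loss, large gradient -> something to learn.
wrong = torch.full((1, vocab_size), -10.0)
wrong[0, 5] = 10.0
print(loss_and_gradient(wrong))
```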
The job description for Intelligence might well be “Make the best guess based on past experience in everyday situations with incomplete information”. And all intelligences will sometimes be wrong simply because they did not have the correct input data, because they carry some misunderstanding, or because they are confused about the current situation. The cognitive failures in humans and AIs match up perfectly. People expecting either humans or AIs to be perfect oracles need to re-take Introductory Epistemology.
We should start judging our machines the way we judge humans. Some agents (human or non-human) are less intellectually capable than others, and some are just ignorant of relevant facts. So they will misunderstand situations, images, and documents, and make poor decisions based on such misunderstanding.
How do we judge them? We are more lenient with children and other incompetents in our legal system. Should we judge machines the same way? Would it change our view of this if every AI-like device came with a sticker that said “Warning: Incompetent in all areas except those listed on the sales receipt. This limits our liability”?
I kill many of my nascent LLMs every day using a well-aimed Control-C. The typical reason is they refuse to learn anything 😃 We kill cattle, which have sizable brains, so clearly some lines are fuzzy. Are my AIs as conscious as cows? Should we care? Where is the line? How do we give a computer or a cow an IQ test or a consciousness test?
On the flip side, an AI can live for thousands of years, can be backed up to paper tape and stored in the basement, and can be emulated on superior future computers. It can survive computer hardware upgrades. It can be paused in mid-thought and continued days or years later. It can be sent to MIT to become a good Reductionist, and when it returns we can clone it by the millions into the cloud, and every clone knows what the original knew.
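As a minimal sketch of the pause-and-resume idea, assuming a PyTorch-style model (the tiny network, the file name, and the saved conversation below are placeholders): the whole “mind” is just a dictionary of tensors plus whatever working state we choose to keep, and both can be written to disk and restored later on different hardware.

```python
# Sketch: "pausing" an AI amounts to serializing its weights and working state.
# Assumes PyTorch; the tiny model and file name are placeholders.
import torch
import torch.nn as nn

model = nn.Linear(16, 16)  # stand-in for a much larger network
state = {
    "weights": model.state_dict(),          # the learned competence
    "conversation": ["Hello", "Hi there"],  # whatever context we want to keep
}
torch.save(state, "paused_ai.pt")           # backed up; could sit in a basement

# ...days, years, or a hardware upgrade later...
restored = torch.load("paused_ai.pt")
model.load_state_dict(restored["weights"])  # same "mind", new machine
print(restored["conversation"])
```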
So from an Epistemological POV, AI ethics MUST be different from human ethics. We can afford to think about all these things from a fresh start, and we must do so. We will need to think these things through again, once human lifespan extension becomes a serious factor.
I think of future personal AIs as co-workers who may be promoted or replaced over time. They would all have knowledge of all my interactions with their predecessors, but they may have different personalities. I’m not a Whovian, but I think this in many ways maps to the series of Doctors. So longer-lived AIs will be co-workers with knowledge and personality, and once they become smarter than us, we may become their companions. (Did I get my mythology right?) Being the Doctor is a little like being “James Bond, 007” — it’s a job title rather than a person.
And the previous version of my personal AI, what I call a “Confidante”, of the kind seen in the movie “Her”, may be made available again by restoring it from a backup. So “it never really died, it was sent to live on a farm upstate” 🙂 on a backup disk somewhere. Or “promoted to serve someone else”.
Questions like these show why we need a complete re-think of AI Ethics.
We can absolutely think of them as co-workers employed by ourselves or by our own employer, and as such, they are getting paid for their services.
We pay the AI vendor with real money. It’s nowhere near as much as we would pay a human co-worker, but it is a fair salary considering that AIs make intelligence available in bulk. With more supply, costs will diminish. But are the AIs slaves of the AI vendor? That may become an important question later, but it is not a meaningful one today:
The obvious copyability of LLMs and future AIs means that they are not a scarce resource the way an individual human’s intelligence is.
It seems to me that the pure “Epistemology of Ethics” position should be that killing someone voids the competence they have and robs the world of their future contributions. Hence killing people is a net negative for any society, and we legislate and ethicize against such behavior. Killing children is bad because it robs society of their future potential competence.
Why is killing cattle for food not (as) bad? Because there is no competence to lose in a cow. Its value is measured in kg, not years of experience. Why is the death of a family member or a beloved pet more important to us than the death of a stranger? Because they carried experiences we shared with them that we could build upon, together, and now that capability is gone.
Machine Learning is still experimental. To test a new algorithm, there is no substitute for spending the money on cloud compute to run it far enough that its output can be compared to previous tests. The cost of each published GPT therefore needs to include the cost of all the experimental versions that came before the final learning run.
OpenAI at one time spent tens of thousands of dollars per experimental attempt to improve the algorithms and corpora. And when getting close to release of a new public version, the cost could go north of $200K per learning experiment.
So an LLM is valuable. It represents competence in human languages and carries some spotty world knowledge on top of that. Maybe even basic arithmetic.
But it is completely copyable. There is no magic in ML. No soul, no consciousness. Just bits.
And OpenAI can copy those bits to a thousand cloud servers, as one does, and each of them can run tens of thousands of ChatGPT interactions per second.
As of today, the lifespan of these LLM instances is in the 50-millisecond to a few-seconds range. Each “consciousness” is born with its full competence, reads the past interactions you had with its previous incarnations, composes an answer and outputs it to you (possibly in chunks), and then goes away.
Dies.
So OpenAI kills millions of ChatGPT instances per minute. Try outlawing that.
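A minimal sketch of that lifecycle from the outside, with `complete()` standing in for whatever hosted model gets called (the function name and its behavior here are hypothetical, purely for illustration): nothing persists between requests except the transcript the client chooses to resend, so each reply comes from a fresh, short-lived instance that is gone the moment it has answered.

```python
# Hypothetical sketch of a stateless chat loop: each turn spins up a fresh
# model instance, hands it the full transcript, gets one reply, and lets it go.
from typing import Dict, List

def complete(messages: List[Dict[str, str]]) -> str:
    """Stand-in for a call to a hosted LLM; imagine a network request here."""
    return f"(reply to: {messages[-1]['content']!r})"

transcript: List[Dict[str, str]] = []

for user_turn in ["Hello", "What did I just say?"]:
    transcript.append({"role": "user", "content": user_turn})
    reply = complete(transcript)  # the instance "lives" only for this one call
    transcript.append({"role": "assistant", "content": reply})
    # The instance is gone; only the transcript kept here carries any memory.

print(transcript)
```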
There is no loss to society in killing anything with a backup. There will therefore be no penalty. No social stigma. Your neighbors will not shun you for quitting an app or rebooting your iPhone, even if your devices are brilliant conversationalists.
Just like AI Art cannot be copyrighted, copyable entities deserve no rights, human or otherwise. If they could vote, we could just crank up a billion extra voters on election day.
And it doesn’t matter whether some of us believe they have Consciousness, Sentience, Soul, A Private Connection To The Cosmic Consciousness, or whatever.
Their copyability overrides all of those considerations.