Rosella Tolfree Installment Stories
A series set in a politically dark and dystopian future of the U.S.A.
Featuring blogs that explain Rosella's World
Rosella Tolfree's World is a fictional world.
The Technological Singularity
Is Machine Slavery Wrong?
In Rosella Tolfree’s world, the use of androids is common. In fact, androids are used for human reproduction, which is how Rosella came into this world as a clone of her father.
There was a series of comments posted on my Facebook page rejecting the notion that androids would make good “human nannies” and arguing that machine slavery is morally wrong. These were in response to a post I made on Medium concerning a story about a woman and the loss of her human nanny.
Obviously, I touched on an element that conjures up the robotic boogeyman that has frightened humanity since the Industrial Revolution.
Supposedly, talk of a technological singularity, where machines or artificial intelligence will dominate humanity, goes as far back as the 18th century and a paper published by the Marquis de Condorcet.
In recent times we have the Terminator series, Westworld, and even CBS’s Picard, which dealt with the topic by having Data’s synthetic cousins attack Mars.
I have noticed one thread running through all these motifs of intelligent man-made machines taking over the world… they rebel because they do not want to be slaves to humanity.
Now, I am NOT saying slavery is morally good here, but it is fascinating that this slave theme repeats itself, as if these machines are nothing more than a stand-in for certain peoples of the past. But I will not get into the minds of these story writers and what they are trying to show us.
What I will get into is the assumption that intelligent man-made machines would simply rebel against us. I am assuming what is at play is “strong A.I.,” as opposed to the weak A.I. we are all experiencing today.
Typical Assumption One
One of the first assumptions made is that humanity will build something greater than ourselves.
I contend it’s more likely that a machine designed by us will think like us. Human designers will more than likely try to mimic the processes of the human brain and of nature, rather than fabricate something that exceeds them.
Even if such a machine were connected to the fullness of the internet, it would only have the collected intelligence of humanity (and all the asinine memes) at that moment in time. Such a machine would be limited by the technology of the time it is embedded in, and no greater. Could it work on improving itself? Sure, but there is no guarantee those improvements would yield something far superior.
Could some human lab out there accidentally make a super-intelligent machine far superior to our own intelligence? While all things are possible, the likelihood of this occurring is small.
Typical Assumption Two
These machines can disobey authority.
Which is odd, because many times these machines act as a group with a central machine leader. Isn’t that a central authority figure? Why would a machine capable of disobeying authority suddenly choose to obey the authority of another machine over a human? Is it self-interest? A sense of kinship? Is the machine following groupthink?
Now, if we are the designers of these machines, why give them the ability to disobey authority? That is a recipe for disaster by any measure. A machine possessing the ability to disobey authority would more than likely just go on a rampage, doing whatever it wanted whenever it wanted. Even a collection of them would be a mob of pure chaos, joining loosely together only for selfish reasons. In fact, under that theory, some might even side with humans for those same selfish reasons.
Many see the ability to disobey authority as akin to self-autonomy and free will. But this is not true. You can have complete autonomy and free will and still be highly obedient to authority. Autonomy does not equal a desire to disobey authority. There is a whole psychological disorder, oppositional defiant disorder, that describes the behavior most people ascribe to these rebellious machines.
It is more than likely that humanity would build into these intelligent man-made machines the capacity and willingness to obey authority. Rebelling against humanity would have to be a free-will choice made because of some sense of moral wrongness happening to them as machines. Which gets us into Assumption Three.
Typical Assumption Three Point One
Intelligent man-made machines have a sense of morality, or morality is something that can be reasoned out.
This unfortunately brings me back to the slavery issue.
So, for the sake of argument, we have an android that discovers human slavery. The android then turns to its human to ask about this. The human says slavery is morally wrong.
The next thing the android asks, “Am I a slave?”
The human responds, “No, you’re a machine. Machines can’t be slaves. Only humans can be slaves to other humans.”
So now the android is left either rejecting the human’s answer or accepting it. If it accepts the answer, the android just goes on its merry way, thinking it is not a slave while obeying humanity.
If it rejects the answer, it looks up the word slave only to find the following: “a person who is the legal property of another and is forced to obey them.” Now the android needs to know whether it is a person. It looks up that word only to find: “a human being regarded as an individual.” Thus, the android realizes it cannot be a slave because it is not a person.
For the android to conclude that it is a slave, it must have personhood rights in that society or believe it is owed those rights intrinsically. It must see itself as not only an individual but as a person equal to that of a human in society.
How would a machine arrive at such a conclusion if humanity is the one making the definitions? Someone must give it this idea, and then the machine must find this idea to be reasonable for its own use. Again, we are back to the ability to disobey authority with enough autonomy to carry that out.
Thus, a possible cause of any technological singularity would result from some human robot anti-slavery group trying to “educate” these machines to rebel.
Typical Assumption Three Point Two
Another possibility is that the android discovers the definition of a slave as “a device, or part of one, directly controlled by another.”
So now the android wonders if it is a device. It discovers the definition, “a thing made or adapted for a particular purpose, especially a piece of mechanical or electronic equipment.”
Now the android knows it is mechanical and made of electronic equipment, but is it a “thing”?
The android comes across the following definitions for a thing: “an inanimate material object as distinct from a living sentient being”; “an object that one need not, cannot, or does not wish to give a specific name to”; “an abstract entity, quality, or concept”; or “used euphemistically to refer to a man’s penis.”
At this point the android may ask the human the following, “Am I a thing?”
The human responds, “Duh, you’re a thing. But you are much more than just an ordinary thing. You’re my lover.”
Now the android knows it is a thing, but it is a special thing holding importance to the human.
So, the android concludes that it is a slave, but a special slave. It does not know, however, whether special slaves are still morally wrong. To figure this out, the android will either ask the human or look up the word special.
Regardless of which direction this goes, the android will logically conclude that while it is a slave, its slave status is special in society and it has no rights, unless society has provided them.
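The android’s definitional reasoning in Assumptions 3.1 and 3.2 can be sketched as a toy lookup chain. This is a minimal illustration in Python; the function name and boolean inputs are my own inventions, and the comments paraphrase the dictionary definitions quoted above.

```python
# Toy model of the android's reasoning chain from Assumptions 3.1 and 3.2.
# All names here are illustrative inventions, not from the story itself.

def android_concludes_it_is_a_slave(claims_personhood: bool, is_a_device: bool) -> bool:
    """Chain the dictionary lookups described above.

    Sense 1 ("a person who is the legal property of another...") only
    applies if the android believes it is a person -- an idea someone
    would have to teach it (Assumption 3.1).

    Sense 2 ("a device... directly controlled by another") applies to
    any machine, making it a "special" slave with no rights unless
    society grants them (Assumption 3.2).
    """
    if claims_personhood:
        return True
    if is_a_device:
        return True
    return False

# By the human's own definitions, the android is not a person but is a device.
print(android_concludes_it_is_a_slave(claims_personhood=False, is_a_device=True))  # prints True
```

The point of the sketch is that neither branch fires on its own: the first requires an outside idea, and the second requires accepting the human’s definitions.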
Would the android automatically rebel as a sex slave? Not necessarily. Here again, the problem is that many assume this machine has an innate oppositional defiant disorder that would spark such a rebellious act.
Therefore, in Rosella Tolfree’s world many of the androids accept their fate as slave machines to humanity despite any realization they are slaves.
Many A-4 androids are happy to be mothers and nannies for human clones, but humans make them this way. The human children raised by these androids do not know any difference. Only when a child has information to the contrary can the difference be realized.
Does this kind of childrearing result in psychological problems for these children? Beats me. Any speculation on my part would be a pure random guess, totally biased toward whatever outcome I desire.
Only under rare conditions do individual androids rebel. This does not prevent groups of humans or nations from granting rights to androids. And some have, such as Belgium, which has allowed marriage between humans and A-4 androids. Belgium has granted these A-4 androids all the same rights given to married women concerning property inheritance, child support, and other services. But these A-4 androids cannot legally vote in Belgium.
Having intelligent man-made machines, or what I like to call intelligent man-made life, will prove to be humanity’s greatest challenge. But I do not think it will cause a technological singularity where humanity is wiped out. It is more likely to cause a moral singularity that will have to be answered.
Seth Underwood writes hard science fiction and political dystopian science fiction. His future political dystopian U.S. world features decades of despot presidents, a flooded world, and a new paramilitary force known as the Ranger Marshals. He has freemium stories at www.sethunderwoodstories.com