Robot Visions
Essays: Robots in Combination
I have been inventing stories about robots now for very nearly half a century. In that time, I have rung almost every conceivable change upon the theme.
Mind you, it was not my intention to compose an encyclopedia of robot nuances; it was not even my intention to write about them for half a century. It just happened that I survived that long and maintained my interest in the concept. And it also just happened that in attempting to think of new story ideas involving robots, I ended up thinking about nearly everything.
For instance, in the sixth volume of the Robot City series, there are the "chemfets," which have been introduced into the hero's body in order to replicate and, eventually, give him direct psycho-electronic control over the core computer, and hence all the robots of Robot City.
Well, in my book Foundation's Edge (Doubleday, 1982), my hero, Golan Trevize, before taking off in a spaceship, makes contact with an advanced computer by placing his hands on an indicated place on the desk before him.
"And as he and the computer held hands, their thinking merged...
"...he saw the room with complete clarity-not just in the direction in which he was looking, but all around and above and below.
"He saw every room in the spaceship, and he saw outside as well. The sun had risen...but he could look at it directly without being dazzled...
"He felt the gentle wind and its temperature, and the sounds of the world about him. He detected the planet's magnetic field and the tiny electrical charges on the wall of the ship.
"He became aware of the controls of the ship...He knew...that if he wanted to lift the ship, or turn it, or accelerate, or make use of any of its abilities, the process was the same as that of performing the analogous process to his body. He had but to use his will."
That was as close as I could come to picturing the result of a mind-computer interface, and now, in connection with this new book, I can't help thinking of it further.
I suppose that the first time human beings learned how to form an interface between the human mind and another sort of intelligence was when they tamed the horse and learned how to use it as a form of transportation. This reached its highest point when human beings rode horses directly, and when a pull at a rein, the touch of a spur, a squeeze of the knees, or just a cry, could make the horse react in accordance with the human will.
It is no wonder that primitive Greeks, seeing horsemen invade the comparatively broad Thessalian plains (the part of Greece most suitable to horsemanship), thought they were seeing a single animal with a human torso and a horse's body. Thus was invented the centaur.
Again, there are "trick drivers" - expert "stunt men" who can make an automobile do marvelous things. One might expect that a New Guinea native who had never seen or heard of an automobile before might believe that such stunts were being carried through by a strange and monstrous living organism that had, as part of its structure, a portion with a human appearance within its stomach.
But a person plus a horse is but an imperfect fusion of intelligence, and a person plus an automobile is but an extension of human muscles by mechanical linkages. A horse can easily disobey signals, or even run away in uncontrollable panic. And an automobile can break down or skid at an inconvenient moment.
The fusion of human and computer, however, ought to be a much closer approach to the ideal. It may be an extension of the mind itself, as I tried to make plain in Foundation's Edge: a multiplication and intensification of sense-perception, an incredible extension of the will.
Under such circumstances, might not the fusion represent, in a very real sense, a single organism, a kind of cybernetic "centaur"? And once such a union is established, would the human fraction wish to break it? Would he not feel such a break to be an unbearable loss and be unable to live with the impoverishment of mind and will he would then have to face? In my novel, Golan Trevize could break away from the computer at will and suffered no ill effects as a result, but perhaps that is not realistic.
Another issue that appears now and then in the Robot City series concerns the interaction of robot and robot.
This has not played a part in most of my stories, simply because I generally had a single robot character of importance in any given story and I dealt entirely with the matter of the interaction between that single robot and various human beings.
Consider robots in combination.
The First Law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.
But suppose two robots are involved, and that one of them, through inadvertence, lack of knowledge, or special circumstances, is engaged in a course of action (quite innocently) that will clearly injure a human being - and suppose the second robot, with greater knowledge or insight, is aware of this. Would he not be required by the First Law to stop the first robot from committing the injury? If there were no other way, would he not be required by the First Law to destroy the first robot without hesitation or regret?
Thus, in my book Robots and Empire (Doubleday, 1985), a robot is introduced for whom human beings have been defined as those speaking with a certain accent. The heroine of the book does not speak with that accent, and the robot therefore feels free to kill her. That robot is promptly destroyed by a second robot.
The situation is similar for the Second Law, which requires robots to obey orders given them by human beings, provided those orders do not violate the First Law.
If, of two robots, one, through inadvertence or lack of understanding, does not obey an order, the second must either carry out the order itself or force the first to do so.
Thus, in an intense scene in Robots and Empire, the villainess gives one robot a direct order. The robot hesitates because the order may cause harm to the heroine. For a while, then, there is a confrontation in which the villainess reinforces her own order while a second robot tries to reason the first robot into a greater realization of the harm that will be done to the heroine. Here we have a case where one robot urges another to obey the Second Law in a truer manner, and to withstand a human being in so doing.
It is the Third Law, however, that brings up the knottiest problem where robots in combination are concerned.
The Third Law states that a robot must protect its own existence, where that is consistent with the First and Second Laws.
But what if two robots are concerned? Is each merely concerned with its own existence, as a literal reading of the Third Law would make it seem? Or would each robot feel the need for helping the other maintain its own existence?
As I said, this problem never arose with me as long as I dealt with only one robot per story. (Sometimes there were other robots, but they were distinctly subsidiary characters - merely spear-carriers, so to speak.)
However, first in The Robots of Dawn (Doubleday, 1983), and then in its sequel Robots and Empire, I had two robots of equal importance. One of these was R. Daneel Olivaw, a humaniform robot (who could not easily be told from a human being) who had earlier appeared in The Caves of Steel (Doubleday, 1954) and in its sequel, The Naked Sun (Doubleday, 1957). The other was R. Giskard Reventlov, who had a more orthodox metallic appearance. Both robots were advanced to the point where their minds were of human complexity.
It was these two robots who were engaged in the struggle with the villainess, the Lady Vasilia. It was Giskard who (such were the exigencies of the plot) was being ordered by Vasilia to leave the service of Gladia (the heroine) and enter her own. And it was Daneel who tenaciously argued the point that Giskard ought to remain with Gladia. Giskard has the ability to exert a limited mental control over human beings, and Daneel points out that Vasilia ought to be controlled for Gladia's safety. He even argues the good of humanity in the abstract ("the Zeroth Law") in favor of such an action.
Daneel's arguments weaken the effect of Vasilia's orders, but not sufficiently. Giskard is made to hesitate, but cannot be forced to take action.
Vasilia, however, decides that Daneel is too dangerous; if he continues to argue, he might win Giskard over to his way of thinking. She therefore orders her own robots to inactivate Daneel, and further orders Daneel not to resist. Daneel must obey the order, and Vasilia's robots advance to the task.
It is then that Giskard acts. Vasilia's four robots are inactivated, and Vasilia herself crumples into a forgetful sleep. Later, Daneel asks Giskard to explain what happened.
Giskard says, "When she ordered the robots to dismantle you, friend Daneel, and showed a clear emotion of pleasure at the prospect, your need, added to what the concept of the Zeroth Law had already done, superseded the Second Law and rivaled the First Law. It was the combination of the Zeroth Law, psychohistory, my loyalty to Lady Gladia, and your need that dictated my action."
Daneel now argues that his own need (he being merely a robot) ought not to have influenced Giskard at all. Giskard obviously agrees, yet he says:
"It is a strange thing, friend Daneel. I do not know how it came about...At the moment when the robots advanced toward you and Lady Vasilia expressed her savage pleasure, my positronic pathway pattern re-formed in an anomalous fashion. For a moment, I thought of you-as a human being-and I reacted accordingly."
Daneel said, "That was wrong."
Giskard said, "I know that. And yet - and yet, if it were to happen again, I believe the same anomalous change would take place again."
And Daneel cannot help but feel that if the situation were reversed, he, too, would act in the same way.
In other words, the robots had reached a stage of complexity where they had begun to lose the distinction between robots and human beings, where they could see each other as "friends" and feel the urge to save each other's existence.