Humanizing machines

I guess this relates to last class's discussion of the Turing test and artificial intelligence. I have no doubt that eventually the machine would be able to tell which is the woman and which is the man, based on deductive logic and process of elimination…but I don't think machines or robots will ever have ethics. They will never be able to consider whether what they do is good or bad, just or unjust. In her book On Violence, Hannah Arendt treats violence as a concept in its own right, separate from power, authority, force, and strength. She argues that violence requires implements, such as tools, and that violence has the ability to overwhelm the outcome, so that undesirable results occur. Violence does not make one powerful; rather, violence is an expression of power. Violence requires power, and in order to perpetrate violence, one needs a group. But computers overcome this by allowing one person to perpetrate a large amount of violence alone. “No government exclusively based on the means of violence has ever existed. Even the totalitarian ruler, whose chief instrument of rule is torture, needs a power basis – the secret police and its net of informers. Only the development of robot soldiers, which, as previously mentioned, would eliminate the human factor completely and, conceivably, permit one man with a push button to destroy whomever he pleased, could change this fundamental ascendancy of power over violence” (Arendt 50). Technology may be more of a liability than an asset, in some cases…what are your thoughts on this? Has anyone seen I, Robot?

5 Responses to “Humanizing machines”

  1. Jean-Sebastien says:

I don’t fully agree with you. I don’t see why a machine couldn’t learn an ethic just as a human can, and learn to differentiate what is good from what is bad. Maybe machines cannot be compassionate like humans, but they can make decisions based on previous experience, as humans do. What machine learning technologies can do is still limited, but that will only improve as algorithms are refined and computational power increases. One day, I don’t see why a computer couldn’t learn a complex ethic and apply it.
    To see what machine learning can do at the moment (I think it’s already impressive), one of the classic success stories in the field is TD-Gammon, a program that learned to play backgammon as well as the best human players in the world.
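    The kind of learning-from-experience behind TD-Gammon is temporal-difference learning. TD-Gammon itself used TD(λ) with a neural network over backgammon positions; the sketch below is a much simpler, tabular TD(0) version on a toy random-walk task, just to illustrate the core update rule of nudging each state's value toward what the next state predicts:

    ```python
    import random

    def td0_random_walk(episodes=5000, alpha=0.1, seed=0):
        """Learn state values for a 5-state random walk with TD(0).

        States 0..6, where 0 and 6 are terminal. Each episode starts
        at state 3 and steps left or right at random. Reaching state 6
        yields reward 1; everything else yields 0. V[s] then estimates
        the probability of finishing at the right end from state s.
        """
        rng = random.Random(seed)
        V = [0.0] * 7            # terminal states keep value 0
        for s in range(1, 6):
            V[s] = 0.5           # neutral initial guess for non-terminals
        for _ in range(episodes):
            s = 3                # every episode starts in the middle
            while s not in (0, 6):
                s2 = s + rng.choice((-1, 1))  # random step left or right
                r = 1.0 if s2 == 6 else 0.0   # reward only at the right end
                # TD(0) update: move V[s] toward the bootstrapped
                # target r + V[s2] ("learn a guess from a guess")
                V[s] += alpha * (r + V[s2] - V[s])
                s = s2
        return V

    values = td0_random_walk()
    # True values for states 1..5 are 1/6, 2/6, ..., 5/6; the learned
    # estimates approach these as more episodes are played.
    ```

    TD-Gammon replaced the value table with a neural network and generated its experience by playing against itself, but the learning signal is the same difference between successive predictions.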

  2. Ira says:

    It seems to me that when a machine ‘learns’ something, it’s just adding a new set of rules or instructions to its program. I agree that the capability of machines to learn ever more complex sets of instructions will be enhanced over time, and I also agree that they will eventually be able to learn things on their own – BUT I do not think that means they will be able to learn ethics, at least not ethics as I think about them. For me, ethics is not simply a set of rules about right and wrong. Ethics are personal and individual, always being redefined and always shifting with the circumstances. I don’t think a machine could ever make a spur-of-the-moment “gut” decision based on the circumstances and its own emotional response the way that humans do.

  3. Pete Barry says:

    I am not in this course, but may make comments from time to time. I am the MSE Program Coordinator as well as the IT Support Slave, so I am interested in both computers and environment (as well as what our students are thinking). However, my academic background tends towards rocks, mud, and ice, so I may not have all that much to add.

    I have read that ethics and moral judgement in humans reside in a specific part of the brain. When this is damaged through injury or stroke, victims tend to display amoral behaviour (not immoral), and a total lack of empathy towards others. (Sources escape me, sorry.)

    If ethics relies on specific neural circuitry, perhaps it can be produced artificially.

    On the other hand, empathy implies identifying with the feelings of others. In order to understand the concept of “other”, one needs to understand the concept of “self”, implying self-awareness. Could it be that in order to create an ethical AI, it would have to be self-aware? Would the reverse be true? Could a self-aware (and rational) AI be amoral?

  4. Liam says:

    My views build in many ways on what Pete says.

    If in fact our minds and thoughts can be traced down to neurons and the interconnections between them, I see no reason an equivalent system couldn’t, in theory, be artificially constructed. Creating and applying a moral code doesn’t seem, on the whole, different from a large number of our other cognitive tasks: we use our minds, and what we’ve learned or experienced, to apply our ‘moral’ code to how we act. I’ll admit it strikes me as almost unromantic to think we’re just a product of our neurons firing, and a large part of me rebels at thinking of our actions, and why we do things, so deterministically.

    Back to the point(ish): I took a very interesting sociology course where we examined how ‘normal’ people could violate what were supposed to be commonly held moral values in committing war crimes. It was really very depressing; in some experiments, a substantial majority of people would, under a certain amount of social pressure, violate what many people (including me) would like to believe are universal moral values.

    It seems to me humans in general do a pretty poor job of implementing their own moral standards (if they even have them!), so I think a fairer question to ask might be: could we ever have an artificial robot with morality approximately as good as (or better than) ours? And if morality does have a physical root we can understand, I don’t see why we couldn’t artificially construct it.
