Comments on: Humanizing machines https://rose.geog.mcgill.ca/wordpress/?p=21

By: Liam https://rose.geog.mcgill.ca/wordpress/?p=21&cpage=1#comment-25 Thu, 13 Jan 2005 23:51:03 +0000
My views build in many ways on what Pete says.

If in fact our minds and thoughts can be traced down to neurons and the interconnections between them, I see no reason an equivalent system couldn’t, in theory, be artificially constructed. Creating and applying a moral code doesn’t seem fundamentally different from many of our other cognitive tasks: we use our minds, and what we’ve learned or experienced, to apply our ‘moral’ code to how we act. I’ll admit it strikes me as almost unromantic to think we’re just a product of our neurons firing, and a large part of me rebels at such a deterministic view of our actions and why we do things.
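As a purely illustrative sketch of what such an ‘equivalent system’ looks like at the smallest scale (all the weights and inputs below are made up, and this is of course nothing like a real brain): an artificial neuron is just a weighted sum of its inputs pushed through a nonlinearity, and a network is built by wiring such units together.

# Toy sketch: "neurons and interconnections" modelled artificially.
# Every number here is arbitrary and chosen only for illustration.
import math

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias, then squashed."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Two "sensory" inputs feed two hidden units, which feed one output unit.
inputs = [0.9, 0.2]
hidden = [
    neuron(inputs, weights=[0.5, -1.2], bias=0.1),
    neuron(inputs, weights=[1.3, 0.4], bias=-0.3),
]
output = neuron(hidden, weights=[0.8, -0.6], bias=0.0)
print(output)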

Back to the point(ish): I took a very interesting sociology course where we examined how ‘normal’ people could violate supposedly commonly held moral values in committing war crimes. It was really quite depressing. In some experiments, a substantial majority of people, under a certain amount of social pressure, would violate what many of us (including me) would like to believe are universal moral values.

It seems to me that humans in general do a pretty poor job of implementing their own moral standards (if they even have them!), so I think a fairer question to ask might be: could we ever build an artificial robot with morality approximately as good as (or better than) ours? And if morality does have a physical root we can understand, I don’t see why we couldn’t artificially construct it.

By: Pete Barry https://rose.geog.mcgill.ca/wordpress/?p=21&cpage=1#comment-22 Thu, 13 Jan 2005 20:44:35 +0000
I am not in this course, but may make comments from time to time. I am the MSE Program Coordinator as well as the IT Support Slave, so I am interested in both computers and the environment (as well as in what our students are thinking). However, my academic background tends towards rocks, mud, and ice, so I may not have all that much to add.

I have read that ethics and moral judgement in humans reside in a specific part of the brain. When this is damaged through injury or stroke, victims tend to display amoral behaviour (not immoral), and a total lack of empathy towards others. (Sources escape me, sorry.)

If ethics relies on specific neural circuitry, perhaps it can be produced artificially.

On the other hand, empathy implies identifying with the feelings of others. To understand the concept of “other”, one needs to understand the concept of “self”, which implies self-awareness. Could it be that an ethical AI would have to be self-aware? Would the reverse be true? Could a self-aware (and rational) AI be amoral?

By: Ira https://rose.geog.mcgill.ca/wordpress/?p=21&cpage=1#comment-21 Thu, 13 Jan 2005 19:17:14 +0000
It seems to me that when a machine ‘learns’ something, it’s just adding a new set of rules or instructions to its program. I agree that the capability of machines to learn more and more complex sets of instructions will be enhanced over time, and I also agree that they will eventually be able to learn things on their own – BUT I do not think that means they will be able to learn ethics, at least not ethics the way I think about them. For me, ethics is not simply a set of rules, right and wrong. Ethics are a personal and individual thing, always being redefined and always shifting depending on the circumstances. I don’t think a machine could ever make a spur-of-the-moment “gut” decision based on the circumstances and its own emotional response the way that humans do.
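For concreteness, here is a toy example of machine ‘learning’ (the data and numbers are entirely made up for illustration): the program’s instructions stay fixed, and what changes are a few numeric weights, nudged after each example – whether that amounts to anything more than mechanical rule-following is exactly the question being debated here.

# Toy perceptron: the code never changes; only the weights are adjusted
# whenever a prediction disagrees with an example. Purely illustrative.
examples = [  # (feature1, feature2) -> label (logical AND)
    ((0.0, 0.0), 0),
    ((0.0, 1.0), 0),
    ((1.0, 0.0), 0),
    ((1.0, 1.0), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):                      # a few passes over the examples
    for (x1, x2), label in examples:
        prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - prediction       # -1, 0, or +1
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print(weights, bias)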

By: Jean-Sebastien https://rose.geog.mcgill.ca/wordpress/?p=21&cpage=1#comment-20 Thu, 13 Jan 2005 17:27:52 +0000
I don’t fully agree with you. I don’t see why a machine couldn’t learn an ethic just as a human can learn one, and learn to differentiate good from bad. Maybe machines cannot be compassionate like humans, but they can make decisions based on previous experience, as humans do. What machine learning technologies can do is still limited, but that will only grow as the algorithms are improved and computational power increases. I don’t see why a computer couldn’t one day learn a complex ethic and apply it.
To see what machine learning can already do (I think it’s already impressive), one of the classic success stories in the field is TD-Gammon, a program that learned to play backgammon as well as the best human players in the world.
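For anyone curious what ‘learning from previous experience’ looks like in code: TD-Gammon was trained with temporal-difference learning, and the snippet below is only a rough TD(0)-style sketch on a made-up five-state toy problem, not TD-Gammon itself (which trained a neural network on full backgammon). The states, rewards, discount, and step size are all invented for illustration.

import random

# Tiny made-up episodic task: wander rightward from state 0; reaching state 4 pays 1.
values = {s: 0.0 for s in range(5)}   # estimated value of each state
alpha = 0.1                           # step size
gamma = 0.9                           # discount: reward sooner is worth more

for _ in range(1000):                 # play many "games"
    state = 0
    while state != 4:
        next_state = state + random.choice([0, 1])   # a dummy "move"
        reward = 1.0 if next_state == 4 else 0.0
        # TD(0): nudge the current estimate toward reward + discounted successor estimate
        values[state] += alpha * (reward + gamma * values[next_state] - values[state])
        state = next_state

print(values)   # states closer to the goal end up with higher estimated value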
