The Philosopher's Eye

News and brain candy for the philosophy community

Caring Robots

Back in 1966 Joseph Weizenbaum created “ELIZA”, a relatively simple computer program meant to simulate a psychotherapist. The program worked largely by rephrasing a patient’s statements as questions, which were then posed back to the patient. Many subjects reported preferring ELIZA to their human therapists, and some continued to value ELIZA’s therapy even after Weizenbaum revealed ELIZA’s workings. (You can read a transcript of ELIZA in action here.)
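To give a sense of how little machinery this kind of program needs, here is a minimal, hypothetical sketch (in Python, not Weizenbaum’s original implementation) of the basic trick: match a pattern in the patient’s statement and hand it back as a question. The patterns and pronoun table below are invented purely for illustration.

```python
import re

# Each rule pairs a pattern in the patient's statement with a question
# template; the captured phrase is reflected back to the patient.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Can you say more about that?"),  # fallback
]

# Swap first-person words for second-person ones so the echo reads naturally.
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are", "mine": "yours"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECT.get(word, word) for word in phrase.split())

def respond(statement: str) -> str:
    """Turn a patient's statement into a reflective question."""
    s = statement.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, s)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I feel anxious about my work."))
# -> Why do you feel anxious about your work?
```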

Things have moved on somewhat since ELIZA’s day. Maja Matarić, a Professor of Computer Science at the University of Southern California, has developed robots that can provide advice and therapy to patients who have suffered strokes or who suffer from Alzheimer’s. The robot can monitor a patient’s movements as they perform a regimen of physical therapy, using a combination of laser scanners and cameras, and provide encouragement and advice. But even more impressively, the robot can gauge how introverted or extroverted the patient is and tailor the tone of its advice-giving accordingly. One stroke patient reported much preferring the robot’s advice and encouragement to that of her husband . . .
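As a purely hypothetical illustration of that last point (Matarić’s actual software is far more sophisticated and is not reproduced here), the tailoring might look something like this: estimate an extroversion score from the interaction, then choose between a high-energy and a low-key style of encouragement.

```python
# Hypothetical sketch only: invented names and thresholds, not Matarić's system.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    extroversion: float  # 0.0 (strongly introverted) .. 1.0 (strongly extroverted)

def encouragement(patient: Patient, reps_completed: int, reps_target: int) -> str:
    """Pick a feedback style based on progress and personality."""
    progress = reps_completed / reps_target
    if patient.extroversion > 0.5:
        # A more extroverted patient gets an energetic, challenging prompt.
        return (f"Great energy, {patient.name}! Only "
                f"{reps_target - reps_completed} to go. Push yourself!")
    # A more introverted patient gets a calmer, gentler nudge.
    return (f"You're doing well, {patient.name}. "
            f"You've finished {progress:.0%} of today's exercises.")

print(encouragement(Patient("Alice", extroversion=0.2), 6, 10))
```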

Similar robots have also been developed as learning aids for autistic children, who seem to prefer the stable, repetitive demonstrations and advice that a robot can provide to the subtly varying instructions of a human. But quite apart from the familiar philosophical questions such increasingly sophisticated machines raise about whether a machine can think and feel, these “caring” robots also raise ethical issues. Sherry Turkle, a professor at MIT, is concerned by the risk such machines will pose to “the most vulnerable populations—children and elders”. Turkle warns: “The paradox is that you can get more attachment with less, so the more simple robots can pose even greater dangers.”

Interestingly, roboticists like Matarić are now confronting what has become known as the “uncanny valley” effect. This is the phenomenon whereby, as a robot becomes increasingly human-like, empathy and familiarity can tip over into unease and disgust – an idea that goes back (at least) to Freud’s 1919 essay “Das Unheimliche” (“The Uncanny”).

You can read about Matarić’s work, and about Turkle’s worries, here.

