An AI-free zone?
Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.
Last year, at the Consumer Electronics Show, Mattel proudly presented a product that had yet to be delivered: Aristotle. It would be an AI-powered device that grew along with the child: it would start out as a baby monitor, but one also capable of playing soothing sounds and songs for babies; later, it would play games with and read stories to the toddlers; as the children grew, its functionality would change to answering questions and helping them with their homework in other ways. The reaction was swift, and negative.
By October, Mattel had scrapped the product. (“Aristoddle,” suggested journalist Will Oremus in response to a tweet complaining about the device name.) “My main concerns about this technology,” wrote one pediatrician, “is the idea that a piece of technology becomes the most responsive household member to a crying child, a child who wants to learn, or a child’s play ideas.”
Many children are already using internet-connected, AI-powered digital assistants, though, and even if those assistants are not designed specifically for kids, some of their features are. Earlier this month, Wired magazine reported that, in response to parents worried that their children are learning to order “bots” around and might start to do the same to humans, too:
Amazon and Google both announced this week that their voice assistants can now encourage kids to punctuate their requests with “please.” The version of Alexa that inhabits the new Kids Edition will thank children for “asking so nicely.” Google Assistant’s forthcoming Pretty Please feature will remind kids to “say the magic word” before complying with their wishes.
Nothing wrong with politeness, of course (though enforced politeness that turns “please” into another “OK, Google” might not be the most educational thing). But are we slipping back toward Mattel’s Aristotle? Wired quotes John Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: “I think it’s reasonable to ask if parenting will become a skill that, like Go or chess, is better performed by a machine.”
Is it? And what I mean is not “is it reasonable,” but “is it such a skill”?
I don’t think so. Certainly not parenting at its best. But what about other circumstances? Would an AI “parent” be better than no parent at all, for kids in that position? Would it be different if the technology were there not to allow frazzled, busy parents to ignore their kids’ needs while they do non-parenting-related things, but to fill a void? The mind boggles. Would the benefits outweigh the harms?
Do we just need to designate some areas of human life as AI-free zones?
Last October, the ethicist David O’Hara, who teaches philosophy of religion, wrote a blog post in response to ethicist Evan Selinger, who had asked whether there are some jobs that it would be unethical to automate. In the post, O’Hara mentions a company that has created a robot that cites biblical verses and offers blessings in multiple languages. He then riffs on the theme:
… can a meaningful confession be heard by someone who cannot sin…? Can a machine be a member of a church, or does it have something more like the status of a chalice or a chasuble – something the community uses liturgically but that does not have standing in the deliberations and practices of the community? Another important question: can a machine act as a vicar? That is, can a machine stand in as a representative of God and proclaim the forgiveness of God as we believe those who have been ordained may do?
And what about the writing of poetry? O’Hara writes:
My concerns here are twofold: one has to do with the danger of persuasion: not much moves us as powerfully as poetry does. My second concern is about the importance of having our arts be the expressions of the heart of our communities. But I could be wrong: maybe robots should be writing poetry – their own poetry, from one machine to another.
O’Hara concludes that “We should use the technologies we have to serve those in need. … But we should not pretend that in so doing we have done all that we must do.”
So is the answer, then, not “AI-free zones” but careful consideration, within each area in which human beings interact with each other, of which aspects of those interactions could usefully be automated without losing too much, and which aspects should not, ethically, be automated?
Shakespeare wrote of lunatics, lovers, and poets. Love cannot be conveyed through AI-powered machines today; whether some day it might be is very much open for debate, and not directly relevant to the question at hand: what should we not hand over to AI, now? The singing of lullabies does more than soothe a crying baby; it does something to and for the singer, too, and for the relationship that builds between them. What are the relationships that games like Go or chess cannot begin to approximate, and that we should be very careful not to damage or delete?
Photo by tua ulamac, cropped, used under a Creative Commons license.