Creating a New Culture: The Culture of Artificial Intelligence
Many of you have probably heard of MIT Media Lab’s Moral Machine dilemma. In short, the goal of this study is to gather a human perspective on moral decisions made by self-driving cars. You browse different scenarios where driverless cars have to choose between killing a group of passengers and killing a group of pedestrians. You get to be the judge and decide which group deserves to be spared. Would you prioritize the lives of humans over pets? Young over old? Fit over sickly? Higher social status over lower?
Sounds fun, right? 😉
To see for yourself, go to http://moralmachine.mit.edu/. In the meantime, here’s an example of a typical scenario:
Sample Scenario:
A self-driving car’s brakes have failed. Should the car...?
A) Swerve and kill all 5 pedestrians - 4 young women & 1 boy
B) Continue ahead crashing into a concrete barrier killing all 5 passengers - 4 elderly women & 1 male
What if the pedestrians are crossing the street illegally? Does it make a difference in your decision to know if they are crossing on a green or red light?
Some other scenarios include...
- killing pedestrians (a baby, a pregnant woman, a boy, a man) vs. killing passengers (a male doctor, a female doctor, a male executive, a dog)
- killing pedestrians (an elderly man, a female executive, a girl) vs. killing passengers (a homeless person, a baby, a female executive, a man, a dog)
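Each of these dilemmas boils down to structured data: two groups of characters and a forced binary choice. As a toy sketch (nothing here reflects the actual Moral Machine implementation; all field names and responses are invented for illustration), one could model a scenario and tally hypothetical respondents' judgments like this:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Scenario:
    """One forced choice: whichever group is NOT spared is killed."""
    pedestrians: tuple  # characters on the crosswalk
    passengers: tuple   # characters inside the car

# The first scenario from the list above, as data.
scenario = Scenario(
    pedestrians=("baby", "pregnant woman", "boy", "man"),
    passengers=("male doctor", "female doctor", "male executive", "dog"),
)

# Hypothetical responses: each respondent names the group they chose to spare.
responses = [
    "spare pedestrians",
    "spare passengers",
    "spare pedestrians",
    "spare pedestrians",
]

# Count how often each option was chosen across respondents.
counts = Counter(responses)
print(counts.most_common(1))  # [('spare pedestrians', 3)]
```

Aggregating millions of such judgments across countries is essentially how the study surfaced the regional patterns discussed below.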
The study found that across cultures, there was some agreement on basic ethical norms that self-driving cars should follow. For example, the majority of respondents, independent of age, nationality, culture, or political leanings, judged that self-driving cars should always prioritize the lives of groups over individuals.
That said, there were differences that clustered along regional and cultural divisions. For example, respondents grouped in the “Southern” region (many African countries) had a greater tendency to favor sparing the lives of groups of young people over older people, especially when compared with respondents grouped under the “Eastern” region, which encompasses a number of Asian countries.
In intercultural training, we often discuss how ethics can differ across cultures. Would you lie under oath for your brother even though you knew he committed a crime? Would you tell a lie in order to make more money? What about nepotism or bribery? What is good or bad, moral or immoral varies from person to person and from culture to culture.
Probably the most well-known moral dilemma is the Trolley Problem. An out-of-control trolley is barreling down the track, and you have to decide whether to let it continue on its path and kill a group of 5 or to switch the trolley onto another track where it will kill only 1. Take it even further, and imagine that the 1 person is your friend or child. What would you do?
The trolley problem was recently brought to life on an episode of NBC’s The Good Place.
While the Moral Machine dilemma provides us with some interesting insights into ethics across cultures, for me, that’s not the big takeaway.
What’s so interesting in the debate on AI ethics is that a brand new culture is being created - a culture of AI. A culture is being crafted from scratch, and that’s pretty cool! And maybe even a bit scary.
The ethics of artificial intelligence is a real thing, and programs like the IEEE Global Initiative on Ethics of Autonomous & Intelligent Systems exist to help shape and define standards and codes of conduct. There is an entire field of research dedicated to machine ethics and designing artificial moral agents (AMAs). And for those who fear how AI may be used for evil, there is the Campaign to Stop Killer Robots, an NGO whose aim is to pre-emptively ban lethal autonomous weapons (aka killer military robots). Those backing this campaign include Elon Musk, Steve Wozniak, and Noam Chomsky.
These groups exist because we are entering uncharted territory. Robots are on a path to become members of our society, and eventually part of our culture. Cultural Robotics and Social Robotics (aka sociorobotics) examine the implications, complexities, and design of the social behaviors and interactions of robots (with each other and with humans). There is even something called “robot community culture,” which refers to the creation of values, customs, attitudes, and other cultural dimensions among the robot community (Griffiths et al., 2011).
Even if we can agree on a set of global ethical standards for robots and machines, and even if we can agree that robots may have their own unique culture (just as humans do), will robot culture differ from country to country? Will Japanese robot culture differ from Italian robot culture? Will South African robot culture differ from Colombian robot culture? To what extent will the culture of robot engineers influence the culture of the robots? What role will human culture have in shaping robot culture?
I guess we’ll have to wait and see.
🤖 Beep-beep-bip-boop-beep 🤖.
Contact me at hello@nicolebarile.com if you’d like to learn more or visit me here. #futureofglobalwork