Teaching Robots Right from Wrong?

Dec 7, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Vyacheslav Polonski and Jane Zavalishina

Curated by Helena M. Herrero Lamuedra

Today, it is difficult to imagine a technology that is as enthralling and terrifying as machine learning. While media coverage and research papers consistently tout the potential of machine learning to become the biggest driver of positive change in business and society, the lingering question on everyone’s mind is: “Well, what if it all goes terribly wrong?”

For years, experts have warned against the unanticipated effects of general artificial intelligence (AI) on society. Ray Kurzweil predicts that by 2029 intelligent machines will be able to outsmart human beings. Stephen Hawking argues that “once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate”. Elon Musk warns that AI may constitute a “fundamental risk to the existence of human civilization”. Alarmist views on the terrifying potential of general AI abound in the media.

More often than not, these dystopian prophecies have been met with calls for a more ethical implementation of AI systems: the idea that engineers should somehow imbue autonomous systems with a sense of ethics. According to some AI experts, we can teach our future robot overlords to tell right from wrong, akin to a “good Samaritan AI” that will always act justly on its own and help humans in distress.

Although this future is still decades away, there is much uncertainty as to how, if at all, we will reach this level of general machine intelligence. What is more pressing right now is that even the narrow AI applications that already exist demand our urgent attention in the ways they make moral decisions in practical, day-to-day situations: when algorithms decide who gets access to loans, for example, or when self-driving cars have to calculate the value of a human life in hazardous situations.

Teaching morality to machines is hard because humans can’t objectively convey morality in measurable metrics that a computer can easily process. In fact, it is questionable whether we, as humans, have a sound understanding of morality that we can all agree on. In moral dilemmas, humans tend to rely on gut feeling rather than elaborate cost-benefit calculations. Machines, on the other hand, need explicit and objective metrics that can be clearly measured and optimized. An AI player, for example, can excel in games with clear rules and boundaries by learning how to optimize the score through repeated playthroughs.
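To make this contrast concrete, here is a minimal sketch of the kind of learner described above: a toy Q-learning agent whose entire world consists of an explicit, measurable score. The five-cell environment and all parameters are invented purely for illustration.

```python
import random

# A toy environment: the agent walks a five-cell track and earns a reward
# only upon reaching the rightmost cell. That score is the single explicit,
# measurable metric the agent optimizes -- nothing else exists for it.
N_STATES = 5
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def greedy(state):
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

# Repeated playthroughs: the value estimates improve precisely because the
# objective is a clear, optimizable number.
for episode in range(300):
    state, done = 0, False
    while not done:
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state, reward, done = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({s: greedy(s) for s in range(N_STATES)})  # learned policy: always step right
```

Everything the agent “cares about” lives in that reward function; morality would somehow have to be compressed into that single number.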

After its experiments with deep reinforcement learning on Atari video games, Alphabet’s DeepMind was able to beat the best human players of Go. Meanwhile, OpenAI amassed “lifetimes” of experience to beat the best human players at Valve’s Dota 2 tournament, one of the most popular e-sports competitions in the world.

But in real-life situations, optimization problems are vastly more complex. For example, how do you teach a machine to algorithmically maximize fairness or to overcome racial and gender biases in its training data? A machine cannot be taught what is fair unless the engineers designing the AI system have a precise conception of what fairness is.
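To illustrate why a precise conception matters, here is one candidate formalization among many: demographic parity, which simply compares approval rates across groups. The loan decisions below are invented, and other reasonable definitions of fairness, such as equalized odds, can directly contradict this one.

```python
# A hedged illustration of why "fairness" needs a precise definition before
# a machine can check it. Demographic parity is just ONE candidate metric.
def demographic_parity_gap(decisions, groups):
    """Difference in approval rates between the best- and worst-treated group."""
    rate = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    values = sorted(rate.values())
    return values[-1] - values[0]

# Hypothetical loan decisions (1 = approved) for two groups, "A" and "B".
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```

Choosing which gap to minimize is itself a moral decision, and it must be made by the engineers before any training begins.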

This has led some authors to worry that a naive application of algorithms to everyday problems could amplify structural discrimination and reproduce biases in the data they are based on. In the worst case, algorithms could deny services to minorities, impede people’s employment opportunities or get the wrong political candidate elected.

Based on our experiences in machine learning, we believe there are three ways to begin designing more ethically aligned machines:

1. Define ethical behavior

AI researchers and ethicists need to formulate ethical values as quantifiable parameters. In other words, they need to provide machines with explicit answers and decision rules for any potential ethical dilemmas they might encounter. This would require that humans agree among themselves on the most ethical course of action in any given situation – a challenging but not impossible task. For example, Germany’s Ethics Commission on Automated and Connected Driving has recommended explicitly programming ethical values into self-driving cars, prioritizing the protection of human life above all else. In the event of an unavoidable accident, the car should be “prohibited to offset victims against one another”. In other words, when a crash is inescapable, a car shouldn’t be able to choose whom to kill based on individual features such as age, gender or physical/mental constitution.
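As a hedged sketch of what such a rule could look like once formalized, the hypothetical function below may count how many people each crash trajectory endangers, but it is written to deliberately ignore their individual features; the data structures are invented for illustration.

```python
from dataclasses import dataclass

# A hypothetical encoding of the commission's constraint: the rule may
# count lives, but must never weigh individual features against each other.
@dataclass
class Person:
    age: int
    gender: str

def choose_trajectory(options):
    """Pick among unavoidable-crash trajectories (each a list of Persons at risk).

    The rule compares only how many people are endangered -- never their
    age, gender or any other individual feature, even though that data
    is visible in the objects.
    """
    return min(options, key=len)

option_a = [Person(age=8, gender="f")]
option_b = [Person(age=80, gender="m"), Person(age=35, gender="f")]
assert choose_trajectory([option_a, option_b]) is option_a  # fewer people at risk
```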

2. Crowdsource our morality

Engineers need to collect enough data on explicit ethical measures to appropriately train AI algorithms. Even after we have defined specific metrics for our ethical values, an AI system might still struggle to pick them up if there is not enough unbiased data to train the models. Getting appropriate data is challenging, because ethical norms cannot always be clearly standardized. Different situations require different ethical approaches, and in some situations there may not be a single ethical course of action at all – just think of the lethal autonomous weapons currently being developed for military applications. One way of solving this would be to crowdsource potential solutions to moral dilemmas from millions of humans. For instance, MIT’s Moral Machine project shows how crowdsourced data can be used to effectively train machines to make better moral decisions in the context of self-driving cars.
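A minimal sketch of how the aggregation step might work, with invented dilemmas and votes: the majority answer becomes a training label, and a low agreement score flags exactly those dilemmas where no single ethical course of action exists.

```python
from collections import Counter

# Many people vote on the same moral dilemma; the majority judgment becomes
# a training label. Dilemma names and votes are invented for illustration.
def aggregate_votes(votes_per_dilemma):
    """Map each dilemma to its majority answer plus an agreement score."""
    labels = {}
    for dilemma, votes in votes_per_dilemma.items():
        answer, count = Counter(votes).most_common(1)[0]
        labels[dilemma] = (answer, count / len(votes))
    return labels

votes = {
    "swerve_vs_stay": ["swerve", "swerve", "stay", "swerve"],
    "brake_vs_swerve": ["brake", "swerve", "brake", "brake"],
}
print(aggregate_votes(votes))
# Low agreement scores mark dilemmas where crowdsourced morality offers no
# clear answer -- the hard cases the text warns about.
```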

3. Make AI transparent

Policymakers need to implement guidelines that make AI decisions with respect to ethics more transparent, especially with regard to ethical metrics and outcomes. If AI systems make mistakes or have undesired consequences, we cannot accept “the algorithm did it” as an adequate excuse. But we also know that demanding full algorithmic transparency is technically untenable (and, quite frankly, not very useful). Neural networks are simply too complex to be scrutinized by human inspectors. Instead, there should be more transparency on how engineers quantified ethical values before programming them, as well as the outcomes that the AI has produced as a result of these choices. For self-driving cars, for instance, this could imply that detailed logs of all automated decisions are kept at all times to ensure their ethical accountability.
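As one possible shape for such logging (the record fields are assumptions, not any standard), every automated decision could be appended to an audit log together with the quantified ethical values that informed it:

```python
import json
import time

# A hedged sketch of the transparency idea: one JSON record per automated
# decision, capturing the ethical metrics used so the choice can be audited
# later. All field names and values are invented for illustration.
def log_decision(log_file, decision, ethical_metrics, inputs_summary):
    record = {
        "timestamp": time.time(),
        "decision": decision,
        "ethical_metrics": ethical_metrics,  # the quantified values applied
        "inputs_summary": inputs_summary,    # enough context to audit later
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")   # append-only, one record per line

log_decision(
    "decisions.log",
    decision="emergency_brake",
    ethical_metrics={"persons_at_risk": 2, "rule": "minimize_count_only"},
    inputs_summary={"speed_kmh": 48, "obstacle": "pedestrian_crossing"},
)
```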

We believe that these three recommendations should be seen as a starting point for developing ethically aligned AI systems. If we fail to imbue ethics into AI systems, we may place ourselves in the dangerous situation of allowing algorithms to decide what’s best for us. For example, in an unavoidable accident, a self-driving car will need to make some decision, for better or worse. But if the car’s designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm. This means that we cannot simply refuse to quantify our values. By walking away from this critical ethical discussion, we are making an implicit moral choice. And as machine intelligence becomes increasingly pervasive in society, the price of inaction could be enormous – it could negatively affect the lives of billions of people.

Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is and how it can be measured and optimized. For AI engineers, this may seem like a daunting task. After all, defining moral values is a challenge mankind has struggled with throughout its history. Nevertheless, the state of AI research requires us to finally define morality and to quantify it in explicit terms. Engineers cannot build a “good Samaritan AI” as long as they lack a formula for the good Samaritan human.

Why Tomorrow’s Customers Won’t Shop at Today’s Retailers

Oct 18, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Dan Clay

Curated by Helena M. Herrero Lamuedra

Meet Dawn. Her T-shirt is connected to the internet, and her tattoo unlocks her car door. She’s never gone shopping, but she gets a package on her doorstep every week. She’s never been lost or late, and she’s never once waited in line. She never goes anywhere without visiting in VR first, and she doesn’t buy anything that wasn’t made just for her.

Dawn is an average 25-year-old in the not-so-distant future. She craves mobility, flexibility, and uniqueness; she spends more on experience than she does on products; she demands speed, transparency, and control; and she has enough choice to avoid any company that doesn’t give her what she wants. We’re in the midst of remarkable change not seen since the Industrial Revolution, and a noticeable gap is growing between what Dawn wants and what traditional retailers provide.

In 2005 Amazon launched free two-day shipping. In 2014 it launched free two-hour shipping. It’s hard to get faster than “Now,” and once immediacy becomes table stakes, competition will move to prediction. By intelligently applying data from our connected devices, smart digital assistants will be able to deliver products before we even acknowledge the need: Imagine a pharmacy that knows you’re about to get sick; an electronics retailer that knows you forgot your charger; an online merchant that knows you’re out of toilet paper; and a subscription service that knows you have a wedding coming up, have a little extra in your bank account, and that you look good in blue. Near-perfect predictions are the future of retail, and it’s up to CX and UX designers to ensure that they are greeted as miraculous time-savers rather than creepy intrusions.
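A deliberately simple sketch of what such a prediction could rest on, assuming nothing more than that reorder intervals are roughly regular; the purchase history below is invented:

```python
from datetime import date, timedelta

# If a customer reorders a product at roughly regular intervals, the next
# need falls about one average interval after the last purchase.
def predict_next_need(purchase_dates):
    gaps = [(b - a).days for a, b in zip(purchase_dates, purchase_dates[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return purchase_dates[-1] + timedelta(days=round(avg_gap))

toilet_paper = [date(2017, 7, 1), date(2017, 7, 29), date(2017, 8, 27)]
print(predict_next_need(toilet_paper))  # roughly a month after the last order
```

Real systems would fold in far richer signals, but even this toy version shows how “knowing you’re out of toilet paper” reduces to pattern-matching on data the retailer already holds.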

Every product is personalized

While consumers are increasingly wary of how much of their personal data is being tracked, they’re also increasingly willing to trade privacy for more tangible benefits. It then falls on companies to ensure those benefits justify the exchange. In the retail space, this increasingly means perfectly tailored products and a more personally relevant experience. Etsy recently acquired an AI startup to make its search experience more relevant and tailored. HelloAva provides customers with personalized skincare product recommendations based on machine learning combined with a few texts and a selfie. And Amazon, constantly at the forefront of customer needs, was recently granted a patent for a custom clothing manufacturing system.

Market to the machines

Dawn, our customer of the future, won’t need to customize all of her purchases; for many of her needs, she’ll give her intelligent, IoT-enabled agent (think Alexa with a master’s degree) personalized filters so the agent can buy on her behalf. When Siri is choosing which shoes to rent, the robot almost becomes the customer, and retailers must win over smart AI assistants before they even reach end customers. Netflix already has a team of people working on this new realm of marketing to machines. As CEO Reed Hastings quipped at this year’s Mobile World Congress, “I’m not sure if in 20 to 50 years we are going to be entertaining you, or entertaining AIs.”
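What “winning over” an AI assistant might mean in practice can be sketched hypothetically: the customer sets her filters once, and the agent screens every offer against them before a purchase is even considered. All product fields and filter names below are invented.

```python
# Marketing to machines: the human configures personalized filters, and the
# automated agent shortlists only the offers that pass every check.
def agent_shortlist(offers, filters):
    """Return the offers an automated shopping agent would even consider."""
    return [
        o for o in offers
        if o["price"] <= filters["max_price"]
        and o["rating"] >= filters["min_rating"]
        and filters["brand_values"] <= set(o["certifications"])
    ]

dawns_filters = {
    "max_price": 120,
    "min_rating": 4.2,
    "brand_values": {"sustainable"},  # retailers must satisfy the machine's checklist
}
offers = [
    {"name": "shoe_a", "price": 95, "rating": 4.6, "certifications": ["sustainable"]},
    {"name": "shoe_b", "price": 80, "rating": 4.8, "certifications": []},
]
print(agent_shortlist(offers, dawns_filters))  # only shoe_a survives the filter
```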

Branded, immersive experiences matter more than ever

As online shopping and automation increase, physical retail spaces will have to deliver much more than a good shopping experience to compel people to visit. This could be through added education (like the expert stylists at Nordstrom’s merchandise-free store), heightened service personalization (like Asics’ on-site 3D foot mapping and gait-cycle analysis) or constantly evolving entertainment (like the monthly changing “exhibition” at Gentle Monster’s Seoul flagship store).

In this context, brand is becoming more than a value proposition or signifier—it’s the essential ingredient preventing companies from becoming commoditized by an on-demand, automated world where your car picks its own motor oil. Brands have a vital responsibility to create a community for customers to belong to and believe in.

A mobile world that feels like a single channel experience

Dawn will be increasingly mobile, and she’ll expect retailers to move along with her. She may research dresses on her phone and expect the store associate to know what she’s looked at. It’s no secret that mobile shopping is continuing to grow, but retailers need to think less about developing separate strategies for their channels and more about maintaining a continuous flow with the one channel that matters: the customer channel.

WeChat, for example, China’s largest social media platform, is used for everything from online shopping and paying at supermarkets to ordering a taxi and getting flight updates, creating a seamless “single channel” experience across all interactions. Snapchat’s new Context Cards, which let users read location-based reviews, view business information and hail rides all within the app, build towards a similar single-channel experience.

The future promises profound change. Yet perhaps the most pressing challenge for retailers is keeping up with customers’ expectations for immediacy, personalization, innovative experiences, and the other myriad ways technological and societal changes are making Dawn the most demanding customer the retail industry has ever seen. The future is daunting, but it’s also full of opportunity, and the retailers that can anticipate the needs of the customer of the future are well-poised for success in the years to come.