Artificial Intelligence and Ethics

#AIforgood #ethics #leadership #digitaldisruption #digitaltransformation

Enterprises must confront the ethical implications of AI use as they increasingly roll out technology that has the potential to reshape how humans interact with machines.

Many enterprises are exploring how AI can help move their business forward, save time and money, and provide more value to all their stakeholders. However, most companies are missing the conversation about the ethical issues of AI use and adoption.
Even at this early stage of AI adoption, it’s important for enterprises to take ethical and responsible approaches when creating AI systems, because the industry is already seeing backlash against implementations that play fast and loose with ethical concerns.
For example, Google recently saw pushback over its Google Duplex demo, which appeared to show an AI-enabled system pretending to be human. Microsoft saw significant issues with its Tay bot, which quickly went off the rails. And, of course, who can ignore what Elon Musk and others are saying about the use of AI?
Yet enterprises are already starting to pay attention to the ethical issues of AI use. Microsoft, for example, has created the AI and Ethics in Engineering and Research Committee to make sure the company’s core values are included in the AI systems it creates.

How AI systems can be biased

AI systems can quickly find themselves in ethical trouble when left inadequately supervised. Notable examples include Google’s image recognition tool mistakenly classifying black people as gorillas and the aforementioned Tay chatbot becoming a racist, sexist bigot.
How could this happen? Plainly put, AI systems are only as good as their training data, and that training data has bias. Just like humans, AI systems need to be fed data and told what that data is in order to learn from it.
What happens when you feed biased training data to a machine is predictable: biased results. Bias in AI systems often stems from inherent human bias. When technologists build systems around their own experience (and Silicon Valley has a notable diversity problem), or when they use training data shaped by historical human bias, the resulting systems reflect that lack of diversity or systemic bias.
Because of this, systems inherit this bias and start to erode the trust of users. Companies are starting to realize that if they plan to gain adoption of their AI systems and realize ROI, those AI systems must be trustworthy. Without trust, they won’t be used, and then the AI investment will be a waste.
Companies are combating inherent data bias by implementing programs to not only broaden the diversity of their data sets, but also the diversity of their teams. More diversity on teams enables a diverse group of people to feed systems different data points from which to learn. Organizations like AI4ALL are helping enterprises meet both of these anti-bias goals.
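
To make the mechanics concrete, here is a minimal, hypothetical sketch in Python (the data, the proxy feature, and the “historical decision” rule are all invented for illustration). A model trained on skewed historical labels learns to lean on a feature that merely stands in for group membership, and then reproduces the historical gap in its own predictions.

```python
# Hypothetical sketch: biased training data in, biased predictions out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, proxy_value, label_shift):
    score = rng.normal(0.0, 1.0, n)                # true qualification, same for both groups
    proxy = proxy_value + rng.normal(0.0, 0.1, n)  # e.g. a zip-code-like stand-in for group
    label = (score + label_shift + rng.normal(0, 0.5, n) > 0).astype(int)  # biased historical decision
    return np.column_stack([score, proxy]), label

# Group A was historically favored (label_shift > 0); group B was not.
Xa, ya = make_group(5000, proxy_value=1.0, label_shift=+0.5)
Xb, yb = make_group(5000, proxy_value=0.0, label_shift=-0.5)
X, y = np.vstack([Xa, Xb]), np.concatenate([ya, yb])
group = np.array(["A"] * len(ya) + ["B"] * len(yb))

model = LogisticRegression().fit(X, y)
pred = model.predict(X)
for g in ("A", "B"):
    print(g, "predicted positive rate:", round(float(pred[group == g].mean()), 2))
# Although true qualification is identical across groups, the model picks up the
# proxy feature and carries the historical gap forward into its own decisions.
```

Auditing predictions by group, as in the final loop, is one of the simplest checks a team can run before putting such a system in front of users.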

More human-like bots raise stakes for ethical AI use

At Google’s I/O event earlier this month, the company demoed Google Duplex, an experimental voice assistant, via a prerecorded interaction in which the system placed a phone call to a hair salon on a person’s behalf. The system did a convincing enough job of impersonating a human, even adding umms and mm-hmms, that the person on the other end was fooled into thinking she was talking to another human.
This demo raised a number of significant and legitimate ethical issues of AI use. Why did the Duplex system try to fake being human? Why didn’t it just identify itself as a bot upfront? Is it OK to fool humans into thinking they’re talking to other humans?
Putting bots like this out into the real world where they pretend to be human, or even pretend to take over the identity of an actual human, can be a big problem. Humans don’t like being fooled. There’s already significant erosion in trust in online systems with people starting to not believe what they read, see or hear.
With bots like Duplex on the loose, people will soon stop believing anyone or anything they interact with via phone. People want to know who they are talking to. They seem to be fine with talking to humans or bots as long as the other party truthfully identifies itself.

Ethical AI is needed for broad AI adoption

Many in the industry are pursuing a code of ethics for bots to address potential issues, malicious or benign, before it’s too late. Such a code would cover not only legitimate uses of bot technology but also intentionally malicious uses of voice bots.
Imagine a malicious user instructing a bot to call a parent with a claim that their child is sick at school, simply to lure them out of the house so a criminal can break in and rob them while they’re away. Bot calls from competing restaurants could make fake reservations, preventing actual customers from getting tables.
Also concerning are information disclosure issues and laws that have not kept pace with voice bots. For example, does a bot calling your doctor’s office to make an appointment and asking for medical information over the phone violate HIPAA?
Forward-thinking companies see the need to create AI systems that address ethics and bias issues, and are taking active measures now. These enterprises have learned from previous cybersecurity issues that addressing trust-related concerns as an afterthought comes at a significant risk. As such, they are investing time and effort to address ethics concerns now before trust in AI systems is eroded to the point of no return. Other businesses should do so, too.

Artificial Intelligence for Social Good

 #AIforgood #digitaltransformation #techdisruption #sustainabledevelopmentgoals

AI is not a silver bullet, but it could help tackle some of the world’s most challenging social problems.

Artificial intelligence (AI) has the potential to help tackle some of the world’s most challenging social problems. To analyze potential applications for social good, we compiled a library of about 160 AI social-impact use cases. They suggest that existing capabilities could contribute to tackling cases across all 17 of the UN’s sustainable-development goals, potentially helping hundreds of millions of people in both advanced and emerging countries.
AI is already being applied in real life in about one-third of these use cases, albeit mostly in relatively small tests. They range from diagnosing cancer to helping blind people navigate their surroundings, identifying victims of online sexual exploitation, and aiding disaster-relief efforts (such as the flooding that followed Hurricane Harvey in 2017). AI is only part of a much broader tool kit of measures that can be used to tackle societal issues, however. For now, issues such as data accessibility and shortages of AI talent constrain its application for social good.
The article is divided into five sections:

First: Mapping AI use cases to domains of social good

For the purposes of this research, we defined AI as deep learning. We grouped use cases into ten social-impact domains based on taxonomies in use among social-sector organizations, such as the AI for Good Foundation and the World Bank. Each use case highlights a type of meaningful problem that can be solved by one or more AI capabilities. The cost of human suffering, and the value of alleviating it, are impossible to gauge and compare. Nonetheless, we used usage frequency as a proxy to measure the potential impact of different AI capabilities.
For about one-third of the use cases in our library, we identified an actual AI deployment (Exhibit 1). Since many of these solutions are small test cases to determine feasibility, their functionality and scope of deployment often suggest that additional potential could be captured. For three-quarters of our use cases, we have seen solutions deployed that use some level of advanced analytics; most of these use cases, although not all, would further benefit from the use of AI techniques.

Crisis response

This domain covers specific crisis-related challenges, such as search-and-rescue responses to natural and human-made disasters, as well as disease outbreaks. Examples include using AI on satellite data to map and predict the progression of wildfires and thereby optimize the response of firefighters. Drones with AI capabilities can also be used to find missing persons in wilderness areas.

Economic empowerment

With an emphasis on currently vulnerable populations, this domain involves opening access to economic resources and opportunities, including jobs, the development of skills, and market information. For example, AI can be used to detect plant damage early through low-altitude sensors, including smartphones and drones, to improve yields for small farms.

Educational challenges

These include maximizing student achievement and improving teachers’ productivity. For example, adaptive-learning technology could recommend content to students based on their past success and engagement with the material.

Environmental challenges

Sustaining biodiversity and combating the depletion of natural resources, pollution, and climate change are challenges in this domain. (See Exhibit 2 for an illustration of how AI can be used to catch wildlife poachers.) The Rainforest Connection, a Bay Area nonprofit, uses AI tools such as Google’s TensorFlow in conservancy efforts across the world. Its platform can detect illegal logging in vulnerable forest areas by analyzing audio-sensor data.

Equality and inclusion

This domain covers challenges to equality, inclusion, and self-determination, such as reducing or eliminating bias based on race, sexual orientation, religion, citizenship, and disability. One use case, based on work by Affectiva, which was spun out of the MIT Media Lab, and Autism Glass, a Stanford research project, involves using AI to automate the recognition of emotions and to provide social cues to help individuals on the autism spectrum interact in social environments.

Health and hunger

This domain addresses health and hunger challenges, including early-stage diagnosis and optimized food distribution. Researchers at the University of Heidelberg and Stanford University have created a disease-detection AI system—using the visual diagnosis of natural images, such as images of skin lesions to determine if they are cancerous—that outperformed professional dermatologists. AI-enabled wearable devices can already detect people with potential early signs of diabetes with 85 percent accuracy by analyzing heart-rate sensor data. These devices, if sufficiently affordable, could help more than 400 million people around the world afflicted by the disease.

Information verification and validation

This domain concerns the challenge of facilitating the provision, validation, and recommendation of helpful, valuable, and reliable information to all. It focuses on filtering or counteracting misleading and distorted content, including false and polarizing information disseminated through the relatively new channels of the internet and social media. Such content can have severely negative consequences, including the manipulation of election results or even mob killings in India and Mexico triggered by the dissemination of false news via messaging applications. Use cases in this domain include actively presenting opposing views to ideologically isolated pockets in social media.

Infrastructure management

This domain includes infrastructure challenges that could promote the public good in the categories of energy, water and waste management, transportation, real estate, and urban planning. For example, traffic-light networks can be optimized using real-time traffic camera data and Internet of Things (IoT) sensors to maximize vehicle throughput. AI can also be used to schedule predictive maintenance of public transportation systems, such as trains and public infrastructure (including bridges), to identify potentially malfunctioning components.

Public and social-sector management

Initiatives related to efficiency and the effective management of public- and social-sector entities, including strong institutions, transparency, and financial management, are included in this domain. For example, AI can be used to identify tax fraud using alternative data such as browsing data, retail data, or payments history.

Security and justice

This domain involves challenges in society such as preventing crime and other physical dangers, as well as tracking criminals and mitigating bias in police forces. It focuses on security, policing, and criminal-justice issues as a unique category, rather than as part of public-sector management. An example is using AI and data from IoT devices to create solutions that help firefighters determine safe paths through burning buildings.
The United Nations’ Sustainable Development Goals (SDGs) are among the best-known and most frequently cited societal challenges, and our use cases map to all 17 of the goals, supporting some aspect of each one (Exhibit 3). Our use-case library does not rest on the taxonomy of the SDGs, because their goals, unlike ours, are not directly related to AI usage; about 20 cases in our library do not map to the SDGs at all. The chart should not be read as a comprehensive evaluation of AI’s potential for each SDG; if an SDG has a low number of cases, that reflects our library rather than AI’s applicability to that SDG.

Second: AI capabilities that can be used for social good

We identified 18 AI capabilities that could be used to benefit society. Fourteen of them fall into three major categories: computer vision, natural-language processing, and speech and audio processing. The remaining four are treated as stand-alone: three AI capabilities (reinforcement learning, content generation, and structured deep learning) plus a category for other analytics techniques.
When we subsequently mapped these capabilities to domains (aggregating use cases) in a heat map, we found some clear patterns.

Image classification and object detection are powerful computer-vision capabilities

Within computer vision, the specific capabilities of image classification and object detection stand out for their potential applications for social good. These capabilities are often used together—for example, when drones need computer vision to navigate a complex forest environment for search-and-rescue purposes. In this case, image classification may be used to distinguish normal ground cover from footpaths, thereby guiding the drone’s directional navigation, while object detection helps circumvent obstacles such as trees.
Some of these use cases consist of tasks a human being could potentially accomplish on an individual level, but the required number of instances is so large that it exceeds human capacity (for example, finding flooded or unusable roads across a large area after a hurricane). In other cases, an AI system can be more accurate than humans, often by processing more information (for example, the early identification of plant diseases to prevent infection of the entire crop).
Computer-vision capabilities such as the identification of people, face detection, and emotion recognition are relevant only in select domains and use cases, including crisis response, security, equality, and education, but where they are relevant, their impact is great. In these use cases, the common theme is the need to identify individuals, most easily accomplished through the analysis of images. An example of such a use case would be taking advantage of face detection on surveillance footage to detect the presence of intruders in a specific area. (Face detection applications detect the presence of people in an image or video frame and should not be confused with facial recognition, which is used to identify individuals by their features.)
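
As a rough illustration of how the two capabilities are combined, the sketch below pairs an off-the-shelf image classifier with an off-the-shelf object detector from torchvision. The model choices, the confidence threshold, and the terrain-versus-obstacle interpretation are assumptions for illustration only; a real search-and-rescue system would be fine-tuned on domain imagery rather than relying on generic ImageNet and COCO categories.

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)
from torchvision.transforms.functional import to_tensor

cls_weights = ResNet50_Weights.DEFAULT
det_weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
classifier = resnet50(weights=cls_weights).eval()              # image classification: what is in view?
detector = fasterrcnn_resnet50_fpn(weights=det_weights).eval() # object detection: where are the obstacles?

def analyze_frame(path: str, score_threshold: float = 0.8):
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        # Classify the whole frame (a fine-tuned model would use terrain labels instead).
        logits = classifier(cls_weights.transforms()(img).unsqueeze(0))
        scene_label = cls_weights.meta["categories"][logits.argmax(dim=1).item()]
        # Detect and localize individual objects the vehicle must route around.
        detections = detector([to_tensor(img)])[0]
        obstacles = [
            (det_weights.meta["categories"][label], box.tolist())
            for box, label, score in zip(
                detections["boxes"], detections["labels"], detections["scores"]
            )
            if score >= score_threshold
        ]
    return scene_label, obstacles
```

The division of labor mirrors the drone example above: the classifier summarizes what the whole frame shows, while the detector localizes the individual objects to avoid.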

Natural-language processing

Some aspects of natural-language processing, including sentiment analysis, language translation, and language understanding, also stand out as applicable to a wide range of domains and use cases. Natural-language processing is most useful in domains where information is commonly stored in unstructured textual form, such as incident reports, health records, newspaper articles, and SMS messages.
As with methods based on computer vision, in some cases a human can probably perform a task with greater accuracy than a trained machine-learning model can. Nonetheless, the speed of “good enough” automated systems can enable meaningful scale efficiencies—for example, providing automated answers to questions that citizens may ask through email. In other cases, especially those that require processing and analyzing vast amounts of information quickly, AI models could outperform humans. An illustrative example could include monitoring the outbreak of disease by analyzing tweets sent in multiple local languages.
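
A “good enough” automated triage of the kind described above can be sketched with standard tools. The sample messages, labels, and the choice of a simple bag-of-words classifier are all illustrative assumptions; a production system would be trained on far larger, multilingual data and kept under human review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: messages flagged as possible disease reports (1) or not (0).
messages = [
    "several children in our village have high fever and rash",
    "clinic reports a spike in diarrhea cases after the flood",
    "market reopens tomorrow after the holiday",
    "football match postponed due to rain",
    "my neighbors are coughing badly and cannot get medicine",
    "new road construction starts next week",
]
labels = [1, 1, 0, 0, 1, 0]

# A simple bag-of-words classifier; speed and scale, not subtlety, is the point.
triage = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
triage.fit(messages, labels)

incoming = ["many people in the camp have fever since yesterday"]
print(triage.predict_proba(incoming)[0][1])  # probability the message warrants human review
```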
Some capabilities, or combination of capabilities, can give the target population opportunities that would not otherwise exist, especially for use cases that involve understanding the natural environment through the interpretation of vision, sound, and speech. An example is the use of AI to help educate children who are on the autism spectrum. Although professional therapists have proved effective in creating behavioral-learning plans for children with autism spectrum disorder (ASD), waitlists for therapy are long. AI tools, primarily using emotion recognition and face detection, can increase access to such educational opportunities by providing cues to help children identify and ultimately learn facial expressions among their family members and friends.

Structured deep learning also may have social-benefit applications

A third category of AI capabilities with social-good applications is structured deep learning to analyze traditional tabular data sets. It can help solve problems ranging from tax fraud (using tax-return data) to finding otherwise hard-to-discover patterns in electronic health records.
Structured deep learning (SDL) has been gaining momentum in the commercial sector in recent years. We expect to see that trend spill over into solutions for social-good use cases, particularly given the abundance of tabular data in the public and social sectors. By automating aspects of basic feature engineering, SDL solutions reduce the need for deep domain expertise or an intuitive understanding of which aspects of the data are important.
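
The sketch below shows the basic pattern behind structured deep learning on tabular records: learned embeddings for each categorical column, concatenated with the numeric columns and fed to a small feed-forward network, so that useful feature interactions are learned rather than hand-engineered. The column names, cardinalities, and fraud-scoring framing are hypothetical.

```python
import torch
import torch.nn as nn

class TabularNet(nn.Module):
    """Minimal structured-deep-learning model: categorical embeddings + numeric features."""
    def __init__(self, cat_cardinalities, n_numeric, emb_dim=8, hidden=64):
        super().__init__()
        self.embeddings = nn.ModuleList(
            [nn.Embedding(card, emb_dim) for card in cat_cardinalities]
        )
        in_dim = emb_dim * len(cat_cardinalities) + n_numeric
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),   # e.g. probability that a tax return is fraudulent
        )

    def forward(self, x_cat, x_num):
        # Each categorical column gets its own learned embedding; no manual
        # one-hot encoding or hand-crafted interaction features required.
        embedded = [emb(x_cat[:, i]) for i, emb in enumerate(self.embeddings)]
        x = torch.cat(embedded + [x_num], dim=1)
        return torch.sigmoid(self.mlp(x))

# Hypothetical example: 3 categorical columns (e.g. filing status, region, industry code)
# and 5 numeric columns (e.g. declared income, deductions, payment-history features).
model = TabularNet(cat_cardinalities=[4, 50, 120], n_numeric=5)
x_cat = torch.randint(0, 4, (32, 3))   # toy batch; real data would respect each column's cardinality
x_num = torch.randn(32, 5)
scores = model(x_cat, x_num)           # shape: (32, 1)
```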

Advanced analytics can be a more time- and cost-effective solution than AI for some use cases

Some of the use cases in our library are better suited to traditional analytics techniques, which are easier to create, than to AI. Moreover, for certain tasks, other analytical techniques can be more suitable than deep learning. For example, where there is a premium on explainability, decision-tree-based models can often be more easily understood by humans. In Flint, Michigan, machine learning (sometimes referred to as AI, although for this research we defined AI more narrowly as deep learning) is being used to predict which houses may still have lead water pipes.
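
Where explainability is at a premium, as in the Flint example, a shallow decision tree over a few property features can be printed and read in full. The features, records, and labels below are invented for illustration and are not the actual Flint model.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented property records: [year_built, has_renovation_permit, parcel_value_in_$1000s]
X = [
    [1925, 0, 40], [1948, 0, 55], [1962, 1, 80], [1978, 1, 95],
    [1930, 1, 60], [1990, 0, 120], [2004, 0, 150], [1955, 0, 70],
]
y = [1, 1, 0, 0, 1, 0, 0, 1]   # 1 = lead service line found, 0 = not found

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A human inspector can read the entire model, which is the point of using it
# when decisions (e.g. which houses to dig up) must be justified to residents.
print(export_text(tree, feature_names=["year_built", "renovation_permit", "parcel_value"]))
```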

Third: Overcoming bottlenecks, especially for data and talent

While the social impact of AI is potentially very large, certain bottlenecks must be overcome if even some of that potential is to be realized. In all, we identified 18 potential bottlenecks through interviews with social-domain experts and with AI researchers and practitioners. We grouped these bottlenecks into four categories by importance.
The most significant bottlenecks are data accessibility, a shortage of talent to develop AI solutions, and “last-mile” implementation challenges.

Data needed for social-impact uses may not be easily accessible

Data accessibility remains a significant challenge. Resolving it will require a willingness, by both private- and public-sector organizations, to make data available. Much of the data essential or useful for social-good applications are in private hands or in public institutions that might not be willing to share their data. These data owners include telecommunications and satellite companies; social-media platforms; financial institutions (for details such as credit histories); hospitals, doctors, and other health providers (medical information); and governments (including tax information for private individuals). Social entrepreneurs and nongovernmental organizations (NGOs) may have difficulty accessing these data sets because of regulations on data use, privacy concerns, and bureaucratic inertia. The data may also have business value and could be commercially available for purchase. Given the challenges of distinguishing between social use and commercial use, the price may be too high for NGOs and others wanting to deploy the data for societal benefits.

The expert AI talent needed to develop and train AI models is in short supply

Just over half of the use cases in our library can leverage solutions created by people with less AI experience. The remaining use cases are more complex as a result of a combination of factors, which vary with the specific case. These need high-level AI expertise—people who may have PhDs or considerable experience with the technologies. Such people are in short supply.
For the use cases requiring less AI expertise, the solution builders needed are data scientists or software developers with AI experience but not necessarily high-level expertise. Most of these use cases involve less complex models that rely on a single mode of data input.
The complexity of problems increases significantly when use cases require several AI capabilities to work together cohesively, as well as multiple different data-type inputs. Progress in developing solutions for these cases will thus require high-level talent, for which demand far outstrips supply and competition is fierce.

‘Last-mile’ implementation challenges are also a significant bottleneck for AI deployment for social good

Even when high-level AI expertise is not required, NGOs and other social-sector organizations can face technical problems over time in deploying and sustaining AI models, which require continued access to some level of AI-related skills. The talent required could range from engineers who can maintain or improve the models to data scientists who can extract meaningful output from them. Handoffs fail when providers of solutions implement them and then disappear without ensuring that a sustainable plan is in place.
Organizations may also have difficulty interpreting the results of an AI model. Even if a model achieves a desired level of accuracy on test data, new or unanticipated failure cases often appear in real-life scenarios. An understanding of how the solution works may require a data scientist or “translator.” Without one, the NGO or other implementing organization may trust the model’s results too much: most AI models cannot perform accurately all the time, and many are described as “brittle” (that is, they fail when their inputs stray in specific ways from the data sets on which they were trained).

Fourth: Risks to be managed

AI tools and techniques can be misused by authorities and others who have access to them, so principles for their use must be established. AI solutions can also unintentionally harm the very people they are supposed to help.
An analysis of our use-case library found that four main categories of risk are particularly relevant when AI solutions are leveraged for social good: bias and fairness, privacy, safe use and security, and “explainability” (the ability to identify the feature or data set that leads to a particular decision or prediction).
Bias in AI may perpetuate and aggravate existing prejudices and social inequalities, disproportionately affecting already-vulnerable populations. Bias of this kind may come about through problematic historical data, including unrepresentative or inaccurate samples. For example, AI-based risk scoring for criminal-justice purposes may be trained on historical criminal data that include biases (among other things, African Americans may be unfairly labeled as high risk). As a result, AI risk scores would perpetuate this bias. Some AI applications already show large disparities in accuracy depending on the data used to train algorithms; for example, examination of facial-analysis software shows an error rate of 0.8 percent for light-skinned men, compared with 34.7 percent for dark-skinned women.
One key source of bias can be poor data quality—for example, when data on past employment records are used to identify future candidates. An AI-powered recruiting tool used by one tech company was abandoned recently after several years of trials. It appeared to show systematic bias against women, which resulted from patterns in training data from years of hiring history. To counteract such biases, skilled and diverse data-science teams should take into account potential issues in the training data or sample intelligently from them.

Breaching the privacy of personal information could cause harm

Privacy concerns about sensitive personal data are already rife for AI. Assuaging these concerns could help speed public acceptance of AI’s widespread use by profit-making and nonprofit organizations alike. The risk is that financial, tax, health, and similar records could become accessible through porous AI systems to people without a legitimate need to access them. That would cause embarrassment and, potentially, harm.

Safe use and security are essential for societal good uses of AI

Ensuring that AI applications are used safely and responsibly is an essential prerequisite for their widespread deployment for societal aims. Seeking to further social good with dangerous technologies would contradict the core mission and could also spark a backlash, given the potentially large number of people involved. For technologies that could affect life and well-being, it will be important to have safety mechanisms in place, including compliance with existing laws and regulations. For example, if AI misdiagnoses patients in hospitals that do not have a safety mechanism in place—particularly if these systems are directly connected to treatment processes—the outcomes could be catastrophic. The framework for accountability and liability for harm done by AI is still evolving.

Decisions made by complex AI models will need to become more readily explainable

Explaining in human terms the results from large, complex AI models remains one of the key challenges to acceptance by users and regulatory authorities. Opening the AI “black box” to show how decisions are made, as well as which factors, features, and data sets are decisive and which are not, will be important for the social use of AI. That will be especially true for stakeholders such as NGOs, which will require a basic level of transparency and will probably want to give clear explanations of the decisions they make. Explainability is especially important for use cases relating to decision making about individuals and, in particular, for cases related to justice and criminal identification, since an accused person must be able to appeal a decision in a meaningful way.

Mitigating risks

Effective mitigation strategies typically involve “human in the loop” interventions: humans are involved in the decision or analysis loop to validate models and double-check results from AI solutions. Such interventions may call for cross-functional teams, including domain experts, engineers, product managers, user-experience researchers, legal professionals, and others, to flag and assess possible unintended consequences.
Human analysis of the data used to train models may be able to identify issues such as bias and lack of representation. Fairness and security “red teams” could carry out solution tests, and in some cases third parties could be brought in to test solutions by using an adversarial approach. To mitigate this kind of bias, university researchers have demonstrated methods such as sampling the data with an understanding of their inherent bias and creating synthetic data sets based on known statistics.
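
The two mitigation ideas just mentioned, bias-aware sampling and synthetic data generated from known statistics, can be sketched as follows. The group sizes and distributions are invented, and real mitigation work involves far more care about which statistics can be trusted in the first place.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: 2,000 records from group A but only 100 from group B.
group_a = rng.normal([0.0, 0.0], [1.0, 1.0], size=(2000, 2))
group_b = rng.normal([0.5, -0.5], [1.2, 0.8], size=(100, 2))

# Option 1: resample the under-represented group up to parity (sampling with replacement).
idx = rng.integers(0, len(group_b), size=len(group_a))
group_b_oversampled = group_b[idx]

# Option 2: generate synthetic records from the group's known or estimated statistics.
mean, cov = group_b.mean(axis=0), np.cov(group_b, rowvar=False)
group_b_synthetic = rng.multivariate_normal(mean, cov, size=len(group_a))

# Either balanced set can then be used to retrain and re-evaluate the model.
balanced_training_set = np.vstack([group_a, group_b_synthetic])
```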
Guardrails to prevent users from blindly trusting AI can be put in place. In medicine, for example, misdiagnoses can be devastating to patients. The problems include false-positive results that cause distress; wrong or unnecessary treatments or surgeries; or, even worse, false negatives, so that patients do not get the correct diagnosis until a disease has reached the terminal stage.
Technology may find some solutions to these challenges, including explainability. For example, nascent approaches to the transparency of models include local interpretable model-agnostic explanations (LIME), which attempt to identify those parts of input data a trained model relies on most to make predictions.
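
The intuition behind LIME-style explanations can be sketched in a few lines: perturb a single input, query the trained model, and fit a small proximity-weighted linear model around that point, whose coefficients indicate which features the black box leaned on locally. This is a hand-rolled simplification of the idea, not the LIME library itself, and the function and parameter names are illustrative.

```python
import numpy as np

def local_linear_explanation(model_predict, x, n_samples=500, scale=0.1, seed=0):
    """Fit a proximity-weighted linear surrogate around one input x to estimate
    which features a (possibly black-box) model relies on most near that point."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    # 1. Perturb the instance we want to explain.
    perturbed = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    # 2. Query the black-box model on the perturbations.
    preds = np.asarray(model_predict(perturbed), dtype=float)
    # 3. Weight perturbations by their proximity to the original instance.
    weights = np.exp(-np.sum((perturbed - x) ** 2, axis=1) / (2 * scale ** 2))
    # 4. Weighted least squares: the coefficients are the local explanation.
    A = np.hstack([perturbed, np.ones((n_samples, 1))])
    sw = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], preds * sw, rcond=None)
    return coef[:-1]   # one importance value per feature (intercept dropped)

# Usage (illustrative): explanation = local_linear_explanation(
#     lambda X: clf.predict_proba(X)[:, 1], patient_record)
```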

Fifth: Scaling up the use of AI for social good

As with any technology deployment for social good, the scaling up and successful application of AI will depend on the willingness of a large group of stakeholders—including collectors and generators of data, as well as governments and NGOs—to engage. These are still the early days of AI’s deployment for social good, and considerable progress will be needed before the vast potential becomes a reality. Public- and private-sector players all have a role to play.

Improving data accessibility for social-impact cases

A wide range of stakeholders owns, controls, collects, or generates the data that could be deployed for AI solutions. Governments are among the most significant collectors of information, which can include tax, health, and education data. Massive volumes of data are also collected by private companies—including satellite operators, telecommunications firms, utilities, and technology companies that run digital platforms, as well as social-media sites and search operations. These data sets may contain highly confidential personal information that cannot be shared without being anonymized. But private operators may also commercialize their data sets, which may therefore be unavailable for pro-bono social-good cases.
Overcoming this accessibility challenge will probably require a global call to action to record data and make it more readily available for well-defined societal initiatives.
Data collectors and generators will need to be encouraged—and possibly mandated—to open access to subsets of their data when that could be in the clear public interest. This is already starting to happen in some areas. For example, many satellite data companies participate in the International Charter on Space and Major Disasters, which commits them to open access to satellite data during emergencies, such as the September 2018 tsunami in Indonesia and Hurricane Michael, which hit the southeastern United States in October 2018.
Close collaboration between NGOs and data collectors and generators could also help facilitate this push to make data more accessible. Funding will be required from governments and foundations for initiatives to record and store data that could be used for social ends.
Even if the data are accessible, using them presents challenges. Continued investment will be needed to support high-quality data labeling. And multiple stakeholders will have to commit themselves to store data so that they can be accessed in a coordinated way and to use the same data-recording standards where possible to ensure seamless interoperability.
Issues of data quality and of potential bias and fairness will also have to be addressed if the data are to be deployed usefully. Transparency will be a key for bias and fairness. A deep understanding of the data, their provenance, and their characteristics must be captured, so that others using the data set understand the potential flaws.
All this is likely to require collaboration among companies, governments, and NGOs to set up regular data forums, in each industry, to work on the availability and accessibility of data and on connectivity issues. Ideally, these stakeholders would set global industry standards and collaborate closely on use cases to ensure that implementation becomes feasible.

Overcoming AI talent shortages is essential for implementing AI-based solutions for social impact

The long-term solution to the talent challenges we have identified will be to recruit more students to major in computer science and specialize in AI. That could be spurred by significant increases in funding—both grants and scholarships—for tertiary education and for PhDs in AI-related fields. Given the high salaries AI expertise commands today, the market may react with a surge in demand for such an education, although the advanced math skills needed could discourage many people.
Sustaining or even increasing current educational opportunities would be helpful. These opportunities include “AI residencies”—one-year training programs at corporate research labs—and shorter-term AI “boot camps” and academies for midcareer professionals. An advanced degree typically is not required for these programs, which can train participants in the practice of AI research without requiring them to spend years in a PhD program.
Given the shortage of experienced AI professionals in the social sector, companies with AI talent could play a major role in focusing more effort on AI solutions that have a social impact. For example, they could encourage employees to volunteer and support or coach noncommercial organizations that want to adopt, deploy, and sustain high-impact AI solutions. Companies and universities with AI talent could also allocate some of their research capacity to new social-benefit AI capabilities or solutions that cannot otherwise attract people with the requisite skills.
Overcoming the shortage of talent that can manage AI implementations will probably require governments and educational providers to work with companies and social-sector organizations to develop more free or low-cost online training courses. Foundations could provide funding for such initiatives.
Task forces of tech and business translators from governments, corporations, and social organizations, as well as freelancers, could be established to help teach NGOs about AI through relatable case studies. Beyond coaching, these task forces could help NGOs scope potential projects, support deployment, and plan sustainable road maps.
From the modest library of use cases that we have begun to compile, we can already see tremendous potential for using AI to address the world’s most important challenges. While that potential is impressive, turning it into reality on the scale it deserves will require focus, collaboration, goodwill, funding, and a determination among many stakeholders to work for the benefit of society. We are only just setting out on this journey. Reaching the destination will be a step-by-step process of confronting barriers and obstacles. We can see the moon, but getting there will require more work and a solid conviction that the goal is worth all the effort—for the sake of everyone.

About the author(s)

Michael Chui is a partner and James Manyika is chairman and a director of the McKinsey Global Institute. Martin Harrysson and Roger Roberts are partners in McKinsey’s Silicon Valley office, where Rita Chung is a consultant. Pieter Nel is a specialist in the New York office; Ashley van Heteren is an expert associate principal in the Amsterdam office.
Teaching Robots Right from Wrong?

Dec 7, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Vyacheslav Polonski and Jane Zavalishina

Curated by Helena M. Herrero Lamuedra

Today, it is difficult to imagine a technology that is as enthralling and terrifying as machine learning. While media coverage and research papers consistently tout the potential of machine learning to become the biggest driver of positive change in business and society, the lingering question on everyone’s mind is: “Well, what if it all goes terribly wrong?”

For years, experts have warned against the unanticipated effects of general artificial intelligence (AI) on society. Ray Kurzweil predicts that by 2029 intelligent machines will be able to outsmart human beings. Stephen Hawking argues that “once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate”. Elon Musk warns that AI may constitute a “fundamental risk to the existence of human civilization”. Alarmist views on the terrifying potential of general AI abound in the media.

More often than not, these dystopian prophecies have been met with calls for a more ethical implementation of AI systems; that somehow engineers should imbue autonomous systems with a sense of ethics. According to some AI experts, we can teach our future robot overlords to tell right from wrong, akin to a “good Samaritan AI” that will always act justly on its own and help humans in distress.

Although this future is still decades away, there is much uncertainty as to how, if at all, we will reach this level of general machine intelligence. More crucial at the moment is that even the narrow AI applications that exist today require urgent attention to the ways in which they make moral decisions in practical, day-to-day situations: for example, when algorithms make decisions about who gets access to loans, or when self-driving cars have to calculate the value of a human life in hazardous situations.

Teaching morality to machines is hard because humans can’t objectively convey morality in measurable metrics that make it easy for a computer to process. In fact, it is even questionable whether we, as humans, have a sound understanding of morality that we can all agree on. In moral dilemmas, humans tend to rely on gut feeling instead of elaborate cost-benefit calculations. Machines, on the other hand, need explicit and objective metrics that can be clearly measured and optimized. For example, an AI player can excel in games with clear rules and boundaries by learning how to optimize the score through repeated playthroughs.
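
The “clear rules and boundaries, clear score” setting described above is exactly where this kind of learning works well. Below is a minimal, self-contained Q-learning sketch on an invented one-dimensional game; the game, rewards, and hyperparameters are illustrative, not drawn from DeepMind’s or OpenAI’s systems.

```python
import numpy as np

# Toy game with clear rules and a clear score: a 1-D track of 7 positions.
# Reaching the right end scores +1, falling off the left end scores -1.
N_STATES, ACTIONS = 7, (-1, +1)          # actions: step left, step right
q = np.zeros((N_STATES, len(ACTIONS)))   # the agent's estimate of each action's value
alpha, gamma, epsilon = 0.1, 0.95, 0.2
rng = np.random.default_rng(0)

def pick_action(state):
    if rng.random() < epsilon:                       # explore
        return rng.integers(len(ACTIONS))
    best = np.flatnonzero(q[state] == q[state].max())
    return int(rng.choice(best))                     # exploit (ties broken randomly)

for _ in range(2000):                                # repeated playthroughs
    state, done = N_STATES // 2, False
    while not done:
        a = pick_action(state)
        nxt = state + ACTIONS[a]
        if nxt >= N_STATES - 1:
            reward, done = 1.0, True
        elif nxt < 0:
            reward, done = -1.0, True
        else:
            reward = 0.0
        target = reward if done else reward + gamma * q[nxt].max()
        q[state, a] += alpha * (target - q[state, a])   # learn to optimize the score
        state = nxt if not done else state

print(q[:N_STATES - 1].argmax(axis=1))   # 1s: the learned policy steps right from every non-terminal state
```

The only signal the agent receives is the score; the “strategy” of always stepping right emerges purely from repeated playthroughs, which is precisely why such methods stall when the objective, unlike a game score, resists clean quantification.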

After its experiments with deep reinforcement learning on Atari video games, Alphabet’s DeepMind was able to beat the best human players of Go. Meanwhile, OpenAI amassed “lifetimes” of experiences to beat the best human players at the Valve Dota 2 tournament, one of the most popular e-sports competitions globally.

But in real-life situations, optimization problems are vastly more complex. For example, how do you teach a machine to algorithmically maximize fairness or to overcome racial and gender biases in its training data? A machine cannot be taught what is fair unless the engineers designing the AI system have a precise conception of what fairness is.

This has led some authors to worry that a naive application of algorithms to everyday problems could amplify structural discrimination and reproduce biases in the data they are based on. In the worst case, algorithms could deny services to minorities, impede people’s employment opportunities or get the wrong political candidate elected.

Based on our experiences in machine learning, we believe there are three ways to begin designing more ethically aligned machines:

1. Define ethical behavior

AI researchers and ethicists need to formulate ethical values as quantifiable parameters. In other words, they need to provide machines with explicit answers and decision rules for any potential ethical dilemmas they might encounter. This would require humans to agree among themselves on the most ethical course of action in any given situation – a challenging but not impossible task. For example, Germany’s Ethics Commission on Automated and Connected Driving has recommended specifically programming ethical values into self-driving cars to prioritize the protection of human life above all else. In the event of an unavoidable accident, the car should be “prohibited to offset victims against one another”. In other words, when a crash is inescapable, the car shouldn’t decide whom to harm based on individual features such as age, gender, or physical or mental constitution.
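
Purely as an illustration of what “explicit decision rules” could mean in code (this is not how any production driving system works, and the data structures are invented), the commission’s constraint can be encoded by making personal attributes invisible to the choice:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Person:
    age: int
    gender: str
    health_status: str          # attributes the guideline forbids the system to weigh

@dataclass
class Maneuver:
    name: str
    people_at_risk: List[Person]

def choose_maneuver(options: List[Maneuver]) -> Maneuver:
    """Illustrative decision rule: in an unavoidable crash, consider only how many
    people each option endangers; never rank individuals by personal characteristics."""
    # Age, gender, and health status are deliberately invisible to the decision.
    # Note that even minimizing the head count is itself a contested design choice
    # that would have to be debated, agreed on, and documented.
    return min(options, key=lambda m: len(m.people_at_risk))

options = [
    Maneuver("swerve_left", [Person(8, "f", "healthy"), Person(72, "m", "frail")]),
    Maneuver("brake_straight", [Person(35, "m", "healthy")]),
]
print(choose_maneuver(options).name)   # "brake_straight": fewer people endangered
```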

2. Crowdsource our morality

Engineers need to collect enough data on explicit ethical measures to appropriately train AI algorithms. Even after we have defined specific metrics for our ethical values, an AI system might still struggle to learn them if there is not enough unbiased data to train the models. Getting appropriate data is challenging, because ethical norms cannot always be clearly standardized. Different situations require different ethical approaches, and in some situations there may not be a single ethical course of action at all – just think about the lethal autonomous weapons that are currently being developed for military applications. One way of solving this would be to crowdsource potential solutions to moral dilemmas from millions of humans. For instance, MIT’s Moral Machine project shows how crowdsourced data can be used to effectively train machines to make better moral decisions in the context of self-driving cars.
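
A hypothetical sketch of the crowdsourcing step: aggregate many individual judgments per scenario into a training label, then fit a model on scenario features. The scenarios, features, and votes below are invented, and projects like the Moral Machine use far richer scenario designs and explicitly study disagreement rather than discarding it.

```python
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

# Invented crowdsourced judgments: each scenario was shown to many respondents,
# who voted for one of two possible actions ("A" or "B").
votes = {
    "scenario_1": ["A", "A", "B", "A", "A"],
    "scenario_2": ["B", "B", "B", "A", "B"],
    "scenario_3": ["A", "B", "A", "A", "A"],
}
# Invented scenario features, e.g. [people_at_risk_option_A, people_at_risk_option_B].
features = {"scenario_1": [1, 3], "scenario_2": [4, 2], "scenario_3": [2, 5]}

# Aggregate votes into a single label per scenario (simple majority here;
# real projects model the disagreement rather than throwing it away).
labels = {s: Counter(v).most_common(1)[0][0] for s, v in votes.items()}

X = [features[s] for s in votes]
y = [labels[s] for s in votes]
model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[1, 4]]))   # predicted majority preference for a new scenario
```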

3. Make AI transparent

Policymakers need to implement guidelines that make AI decisions with respect to ethics more transparent, especially with regard to ethical metrics and outcomes. If AI systems make mistakes or have undesired consequences, we cannot accept “the algorithm did it” as an adequate excuse. But we also know that demanding full algorithmic transparency is technically untenable (and, quite frankly, not very useful). Neural networks are simply too complex to be scrutinized by human inspectors. Instead, there should be more transparency on how engineers quantified ethical values before programming them, as well as the outcomes that the AI has produced as a result of these choices. For self-driving cars, for instance, this could imply that detailed logs of all automated decisions are kept at all times to ensure their ethical accountability.
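
The logging suggestion could look something like the sketch below: every automated decision is appended to an audit trail together with its inputs, the model version, and the quantified ethical configuration in force at the time. The schema and field names are hypothetical.

```python
import json, time, uuid

def log_decision(logfile, model_version, ethical_config, inputs, decision):
    """Append one auditable record per automated decision (hypothetical schema)."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,    # which model made the call
        "ethical_config": ethical_config,  # the quantified values in force at the time
        "inputs": inputs,                  # what the system saw
        "decision": decision,              # what it chose to do
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n") # one JSON record per line for later auditing

log_decision(
    "decisions.log",
    model_version="planner-2.3.1",
    ethical_config={"offset_victims_by_attributes": False},
    inputs={"obstacle_detected": True, "speed_kmh": 48},
    decision="brake_straight",
)
```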

We believe that these three recommendations should be seen as a starting point for developing ethically aligned AI systems. If we fail to imbue ethics into AI systems, we may be placing ourselves in the dangerous situation of allowing algorithms to decide what’s best for us. For example, in an unavoidable accident situation, self-driving cars will need to make some decision, for better or worse. But if the car’s designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm. This means that we cannot simply refuse to quantify our values. By walking away from this critical ethical discussion, we are making an implicit moral choice. And as machine intelligence becomes increasingly pervasive in society, the price of inaction could be enormous – it could negatively affect the lives of billions of people.

Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is and how it can be measured and optimized. For AI engineers, this may seem like a daunting task. After all, defining moral values is a challenge mankind has struggled with throughout its history. Nevertheless, the state of AI research requires us to finally define morality and to quantify it in explicit terms. Engineers cannot build a “good Samaritan AI” as long as they lack a formula for the good Samaritan human.

Scientists Call Out Ethical Concerns for the Future of Neuro-technology

Nov 27, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Edd Gent

Curated by Helena M. Herrero Lamuedra

For some die-hard tech evangelists, using neural interfaces to merge with AI is the inevitable next step in humankind’s evolution. But a group of 27 neuroscientists, ethicists, and machine learning experts have highlighted the myriad ethical pitfalls that could be waiting.

To be clear, it’s not just futurologists banking on the convergence of these emerging technologies. The Morningside Group estimates that private spending on neurotechnology is in the region of $100 million a year and growing fast, while in the US alone public funding since 2013 has passed the $500 million mark.

The group is made up of representatives from international brain research projects, tech companies like Google and neural interface startup Kernel, and academics from the US, Canada, Europe, Israel, China, Japan, and Australia. They met in May to discuss the ethics of neuro-technology and AI, and have now published their conclusions in the journal Nature.

While the authors concede it’s likely to be years or even decades before neural interfaces are used outside of limited medical contexts, they say we are headed towards a future where we can decode and manipulate people’s mental processes, communicate telepathically, and technologically augment human mental and physical capabilities.

“Such advances could revolutionize the treatment of many conditions…and transform human experience for the better,” they write. “But the technology could also exacerbate social inequalities and offer corporations, hackers, governments, or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency, and an understanding of individuals as entities bound by their bodies.”

The researchers identify four key areas of concern: privacy and consent, agency and identity, augmentation, and bias. The first and last topics are already mainstays of warnings about the dangers of unregulated and unconscientious use of machine learning, and the problems and solutions the authors highlight are well-worn.

On privacy, the concerns are much the same as those raised about the reams of personal data corporations and governments are already hoovering up. The added sensitivity of neural data makes suggestions such as an automatic opt-out from the sharing of neural data, and bans on individuals selling their data, more feasible.

But other suggestions to use technological approaches to better protect data like “differential privacy,” “federated learning,” and blockchain are equally applicable to non-neural data. Similarly, the ability of machine learning algorithms to pick up bias inherent in training data is already a well-documented problem, and one with ramifications that go beyond just neuro-technology.

When it comes to identity, agency, and augmentation, though, the authors show how the convergence of AI and neuro-technology could result in entirely novel challenges that could test our assumptions about the nature of the self, personal responsibility, and what ties humans together as a species.

They ask the reader to imagine if machine learning algorithms combined with neural interfaces allowed a form of ‘auto-complete’ function that could fill the gap between intention and action, or if you could telepathically control devices at great distance or in collaboration with other minds. These are all realistic possibilities that could blur our understanding of who we are and what actions we can attribute as our own.

The authors suggest adding “neurorights” that protect identity and agency to international treaties like the Universal Declaration of Human Rights, or possibly the creation of a new international convention on the technology. This isn’t an entirely new idea; in May, I reported on a proposal for four new human rights to protect people against neural implants being used to monitor their thoughts or interfere with or hijack their mental processes.

But these rights were designed primarily to protect against coercive exploitation of neuro-technology or the data it produces. The concerns around identity and agency are more philosophical, and it’s less clear that new rights would be an effective way to deal with them. While the examples the authors highlight could be forced upon someone, they sound more like something a person would willingly adopt, potentially waiving rights in return for enhanced capabilities.

The authors suggest these rights could enshrine a requirement to educate people about the possible cognitive and emotional side effects of neuro-technologies rather than the purely medical impacts. That’s a sensible suggestion, but ultimately people may have to make up their own minds about what they are willing to give up in return for new abilities.

This leads to the authors’ final area of concern—augmentation. As neuro-technology makes it possible for people to enhance their mental, physical, and sensory capacities, it is likely to raise concerns about equitable access, pressure to keep up, and the potential for discrimination against those who don’t. There’s also the danger that military applications could lead to an arms race.

The authors suggest that guidelines should be drawn up at both the national and international levels to set limits on augmentation, in a similar way to those being drawn up to control gene editing in humans, but they admit that “any lines drawn will inevitably be blurry.” That’s because it’s hard to predict the impact these technologies will have, and building international consensus will be difficult because some cultures lend more weight than others to things like privacy and individuality.

The temptation could be to simply ban the technology altogether, but the researchers warn that this could simply push it underground. In the end, they conclude that it may come down to the developers of the technology to ensure it does more good than harm. Individual engineers can’t be expected to shoulder this burden alone, though.

“History indicates that profit hunting will often trump social responsibility in the corporate world,” the authors write. “And even if, at an individual level, most technologists set out to benefit humanity, they can come up against complex ethical dilemmas for which they aren’t prepared.”

For this reason, they say, industry and academia need to devise a code of conduct similar to the Hippocratic Oath doctors are required to take, and rigorous ethical training needs to become standard for anyone joining a company or laboratory.

Why Tomorrow’s Customers Won’t Shop at Today’s Retailers

Oct 18, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Dan Clay

Curated by Helena M. Herrero Lamuedra

Meet Dawn. Her T-shirt is connected to the internet, and her tattoo unlocks her car door. She’s never gone shopping, but she gets a package on her doorstep every week. She’s never been lost or late, and she’s never once waited in line. She never goes anywhere without visiting in VR first, and she doesn’t buy anything that wasn’t made just for her.

Dawn is an average 25-year-old in the not-so-distant future. She craves mobility, flexibility, and uniqueness; she spends more on experience than she does on products; she demands speed, transparency, and control; and she has enough choice to avoid any company that doesn’t give her what she wants. We’re in the midst of remarkable change not seen since the Industrial Revolution, and a noticeable gap is growing between what Dawn wants and what traditional retailers provide.

In 2005 Amazon launched free two-day shipping. In 2014 it launched free two-hour shipping. It’s hard to get faster than “Now,” and once immediacy becomes table stakes, competition will move to prediction. By intelligently applying data from our connected devices, smart digital assistants will be able to deliver products before we even acknowledge the need: Imagine a pharmacy that knows you’re about to get sick; an electronics retailer that knows you forgot your charger; an online merchant that knows you’re out of toilet paper; and a subscription service that knows you have a wedding coming up, have a little extra in your bank account, and that you look good in blue. Near-perfect predictions are the future of retail, and it’s up to CX and UX designers to ensure that they are greeted as miraculous time-savers rather than creepy intrusions.

Every product is personalized

While consumers are increasingly wary about how much of their personal data is being tracked, they’re also increasingly willing to trade privacy for more tangible benefits. It then falls on companies to ensure those benefits justify the exchange. In the retail space this increasingly means perfectly tailored products and a more personally relevant experience. Etsy recently acquired an AI startup to make its search experience more relevant and tailored. HelloAva provides customers with personalized skincare product recommendations based on machine learning combined with a few texts and a selfie. Amazon, constantly at the forefront of customer needs, recently acquired a patent for a custom clothing manufacturing system.

Market to the machines

Dawn, our customer of the future, won’t need to customize all of her purchases; for many of her needs, she’ll give her intelligent, IoT-enabled agent (think Alexa with a master’s degree) personalized filters so the agent can buy on her behalf. When Siri is choosing which shoes to rent, the robot almost becomes the customer, and retailers must win over smart AI assistants before they even reach end customers. Netflix already has a team of people working on this new realm of marketing to machines. As CEO Reed Hastings quipped at this year’s Mobile World Congress, “I’m not sure if in 20 to 50 years we are going to be entertaining you, or entertaining AIs.”

Branded, immersive experiences matter more than ever

As online shopping and automation increase, physical retail spaces will have to deliver much more than just a good shopping experience to compel people to visit. This could come through added education (like the expert stylists at Nordstrom’s merchandise-free store), heightened service personalization (like Asics’ on-site 3D foot mapping and gait-cycle analysis), or constantly evolving entertainment (like the monthly changing “exhibition” at Gentle Monster’s Seoul flagship store).

In this context, brand is becoming more than a value proposition or signifier—it’s the essential ingredient preventing companies from becoming commoditized by an on-demand, automated world where your car picks its own motor oil. Brands have a vital responsibility to create a community for customers to belong to and believe in.

A mobile world that feels like a single channel experience

Dawn will be increasingly mobile, and she’ll expect retailers to move along with her. She may research dresses on her phone and expect the store associate to know what she’s looked at. It’s no secret that mobile shopping is continuing to grow, but retailers need to think less about developing separate strategies for their channels and more about maintaining a continuous flow with the one channel that matters: the customer channel.

WeChat, China’s largest social-media channel, for example, is used for everything from online shopping and paying at supermarkets to ordering a taxi and getting flight updates, creating a seamless “single channel” experience across all interactions. Snapchat’s new Context Cards, which let users read location-based reviews, see business information, and hail rides all within the app, build toward a similar single-channel experience.

The future promises profound change. Yet perhaps the most pressing challenge for retailers is keeping up with customers’ expectations for immediacy, personalization, innovative experiences, and the other myriad ways technological and societal changes are making Dawn the most demanding customer the retail industry has ever seen. The future is daunting, but it’s also full of opportunity, and the retailers that can anticipate the needs of the customer of the future are well-poised for success in the years to come.

Keeping Up With New Work Culture

Keeping Up With New Work Culture

May 15, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Scott Scanlon, Hunt Scanlon Media

Curated by Helena M. Herrero Lamuedra

Companies are facing a radically shifting context for the workforce, the workplace, and the world of work, and these shifts have already changed the rules for nearly every organizational people practice, from learning and management to executive recruiting and the definition of work itself. Every business leader, no matter their function or industry, has experienced some form of radical work transformation, whether digitally (in the form of social media, for example), demographically, or in countless other ways. Old paradigms are out, new ways of thinking are in, and talent, that one ‘commodity’ we’re all after, is caught up in the middle of it all.

Almost 90 percent of HR and business leaders rate building the organization of the future as their highest priority, according to Deloitte’s latest Global Human Capital Trends report, “Rewriting the Rules for the Digital Age.” In the report, Deloitte issues a call to action for companies to completely reconsider their organizational structure, talent, and HR strategies to keep pace with the disruption.

A Networked World of Work

“Technology is advancing at an unprecedented rate and these innovations have completely transformed the way we live, work and communicate,” said Josh Bersin, principal and founder, Bersin by Deloitte, Deloitte Consulting. “Ultimately, the digital world of work has changed the rules of business. Organizations should shift their entire mind-set and behaviors to ensure they can lead, organize, motivate, manage and engage the 21st century workforce, or risk being left behind.”

With more than 10,000 HR and business leaders in 140 countries weighing in, this massive study reveals that business leaders are turning to new organization models that highlight the networked nature of today’s world of work. However, as business productivity often fails to keep pace with technological progress, Deloitte finds that HR leaders are struggling to keep up, with only 35 percent rating their capabilities as ‘good’ or ‘excellent.’

“As technology, artificial intelligence, and robotics transform business models and work, companies should start to rethink their management practices and organizational models,” said Brett Walsh, global human capital leader for Deloitte Global. “The future of work is driving the development of a set of ‘new rules’ that organizations should follow if they want to remain competitive.”

Talent Acquisition: Biggest Issue Facing Companies

As the workforce evolves, organizations are focusing on networks of teams, and recruiting and developing the right people is more consequential than ever. However, while Deloitte finds that cognitive technologies have helped leaders bring talent acquisition into the digital world, only 22 percent of survey respondents describe their companies as ‘excellent’ at building a differentiated employee experience once talent is acquired. In fact, the gap between talent acquisition’s importance and organizations’ ability to meet the need widened compared with last year’s survey.

How Else the World of Work Is Changing

It is, indeed, a landscape of shifting priorities, and nowhere are we seeing this unfold more than among the group that matters most: job candidates. Five years ago, benefits topped their list of preferences. Today it’s culture and flexibility. Organizations need talented employees to drive strategy and achieve goals, but finding, recruiting and retaining people is becoming more difficult. While the severity of the issue varies among organizations, industries and geographies, it’s clear that this new landscape has created new demands. And organizations are scrambling.

According to the report, it is critical to take an integrated approach to building the employee experience, with a large part of it centering on ‘careers and learning,’ which rose to second place on HR and business leaders’ priority lists; 83 percent of those surveyed ranked it as ‘important’ or ‘very important.’ Deloitte finds that as organizations shed legacy systems and dismantle yesterday’s hierarchies, it is important to place a higher premium on immersive learning experiences that develop leaders who can thrive in today’s digital world and appeal to diverse workforce needs.

The importance of leadership as a driver of the employee experience remains strong: the percentage of companies with experiential programs for leaders rose 17 percentage points, from 47 percent in 2015 to 64 percent this year. Deloitte believes there is still a crucial need, however, for stronger and different types of leaders, particularly as today’s business world demands those who demonstrate more agile and digital capabilities.

Time to Rewrite the Rules

As organizations become more digital, leaders should consider disruptive technologies for every aspect of their human capital needs. Deloitte finds that 56 percent of companies are redesigning their HR programs to leverage digital and mobile tools, and 33 percent are already using some form of artificial intelligence (AI) applications to deliver HR solutions.

“HR and other business leaders tell us that they are being asked to create a digital workplace in order to become an ‘organization of the future,’” said Erica Volini, a principal with Deloitte Consulting LLP, and national managing director of the firm’s U.S. human capital practice. “To rewrite the rules on a broad scale, HR should play a leading role in helping the company redesign the organization by bringing digital technologies to both the workforce and to the HR organization itself.”

Deloitte found that the HR function is in the middle of a wide-ranging identity shift. To position itself as a key business advisor to the organization, HR should focus on efficient service delivery, excellence in talent programs, and the entire design of work through a digital lens.

How Jobs Are Being Reinvented

While many jobs are being reinvented through technology and some tasks are being automated, Deloitte’s research shows that the essentially human aspects of work – such as empathy, communication, and problem solving – are becoming more important than ever.

This shift is driving an increased focus not only on reskilling but also on people analytics, which helps organizations gain even greater insight into the capabilities of their workforce on a global scale. However, organizations continue to fall short in this area, with only 8 percent reporting that they have usable data and only 9 percent believing they have a good understanding of the talent factors that drive performance in this new world of work.

One of the new rules for the digital age is to expand our vision of the workforce: think about jobs in the context of tasks that can be automated (or outsourced) and the new role of human skills, and focus even more heavily on the customer experience, the employee experience, and the employment value proposition for people.

This challenge requires major cross-functional attention, effort, and collaboration. It also represents one of the biggest opportunities for the HR organization. To be able to rewrite the rules, HR needs to prove it has the insights and capabilities to successfully play outside the lines.