The Trust Crisis.

Businesses put an awful lot of effort into meeting the diverse needs of their stakeholders — customers, investors, employees, and society at large. But they’re not paying enough attention to one ingredient that’s crucial to productive relationships with those stakeholders: trust.

Trust, as defined by organizational scholars, is our willingness to be vulnerable to the actions of others because we believe they have good intentions and will behave well toward us. In other words, we let others have power over us because we think they won’t hurt us and will in fact help us. When we decide to interact with a company, we believe it won’t deceive us or abuse its relationship with us. However, trust is a double-edged sword. Our willingness to be vulnerable also means that our trust can be betrayed. And over and over, businesses have betrayed stakeholders’ trust.

Consider Facebook. In April 2018, CEO Mark Zuckerberg came before Congress and was questioned about Facebook’s commitment to data privacy after it came to light that the company had exposed the personal data of 87 million users to the political consultant Cambridge Analytica, which used it to target voters during the 2016 U.S. presidential election. Then, in September, Facebook admitted that hackers had gained access to the log-in information of 50 million of its users. The year closed out with a New York Times investigation revealing that Facebook had given Netflix, Spotify, Microsoft, Yahoo, and Amazon access to its users’ personal data, including in some cases their private messages.

So, in the middle of last year, when Zuckerberg announced that Facebook would launch a dating app, observers shook their heads. And this past April, when the company announced it was releasing an app that allowed people to share photos and make video calls on its smart-home gadget Portal, TechCrunch observed that “critics were mostly surprised by the device’s quality but too freaked out to recommend it.” Why would we trust Facebook with personal data on something as sensitive as dating — or with a camera and microphone — given its horrible track record?

THE AUTHORS

SANDRA J. SUCHER AND SHALENE GUPTA

Sandra Sucher has always been fascinated by the dynamics of trust, whether she was watching them unfold on the job or studying them through the lens of academia. One moment in particular stands out from her days as the chief quality officer at Fidelity, where she worked before teaching and doing research at Harvard Business School. To explore potential improvements to customer service, she asked her colleagues in market research to study what happened when a customer experienced a problem at the company. Their counterintuitive finding: Trust actually increased when something went wrong and the company fixed it to the customer’s satisfaction. “This is so different from how we normally think about trust,” she says. “It’s not as fragile as we believe. Trust can actually be regained, and it may not be lost forever.”

Before becoming a research associate at Harvard Business School, Shalene Gupta worked in a variety of industries, from government to education to journalism. In each she was struck by how critical trust was to simply making things work on a day-to-day level. During her public sector days, she saw the plethora of rules, checks, and balances in place because taxpayers didn’t trust the government to spend money wisely. When she taught English in Malaysia as a Fulbright Scholar, she needed to gain the trust of students and parents from a different culture to be effective. As a journalist at Fortune, she had to earn the trust of her sources so that they would talk to her, while also ensuring that readers trusted her objectivity.

All of this, plus her work at HBS, made clear to her that business is really a series of relationships that have to be fostered by trust. “The dialogue tends to be more about numbers, profits, and growth,” she says, “not about the underlying context that allows numbers, profits, and growth to happen.”

Currently, Sucher and Gupta are cowriting a book on trust, Trusted: How Companies Build It, Lose It, and Regain It.

Volkswagen is still struggling with the aftermath of the 2015 revelation that it cheated on emissions tests. United Airlines has yet to fully recover from two self-inflicted wounds: getting security to drag a doctor off a plane after he resisted giving up his seat in 2017, and the death of a puppy on a plane in 2018 after a flight attendant insisted its owner put it in an overhead bin. In the spring of 2019 Boeing had to be forced by a presidential order to ground its 737 Max jets in the United States, even though crashes had killed everyone on board two planes in five months and some 42 other countries had forbidden the jets to fly. Later the news broke that Boeing had known there was a problem with the jet’s safety features as early as 2017 but failed to disclose it. Now, customers, pilots and crew, and regulators all over the world are wondering why they should trust Boeing. Whose interests was it serving?

Betrayals of trust have major financial consequences. In 2018 the Economist studied eight of the largest recent business scandals, comparing the companies involved with their peer groups, and found that they had forfeited significant amounts of value. The median firm was worth 30% less than it would have been valued had it not experienced a scandal. That same year another study, by IBM Security and Ponemon Institute, put the average cost of a data breach at $3.86 million, a 6.4% increase over the year before, and calculated that on average each stolen record cost a company $148.

Creating trust, in contrast, lifts performance. In a 1999 study of Holiday Inns, 6,500 employees rated their trust in their managers on a scale of 1 to 5. The researchers found that a one-eighth point improvement in scores could be expected to increase an inn’s annual profits by 2.5% of revenues, or $250,000 more per hotel. No other aspect of managers’ behavior had such a large impact on profits.

Trust also has macro-level benefits. A 1997 study of 29 market economies across one decade by World Bank economists showed that a 10-percentage-point increase in trust in an environment was correlated with a 0.8-percentage-point bump in per capita income growth.

So our need to trust and be trusted has a very real economic impact. More than that, it deeply affects the fabric of society. If we can’t trust other people, we’ll avoid interacting with them, which will make it hard to build anything, solve problems, or innovate.

Building trust isn’t glamorous or easy. And at times it involves making complex decisions and difficult trade-offs.

In her 15 years of research into what trusted companies do, Sandra has found — no surprise — that they have strong relationships with all their main stakeholders. But the behaviors and processes that built those relationships were surprising. She has distilled her findings into a framework that can help companies nurture and maintain trust. It explains the basic promises stakeholders expect a company to keep, the four ways they evaluate companies for trustworthiness, and five myths that prevent companies from rebuilding trust.

WHAT STAKEHOLDERS WANT

Companies can’t build trust unless they understand the fundamental promises they make to stakeholders. Firms have three kinds of responsibilities: Economically, people count on them to provide value. Legally, people expect them to follow not just the letter of the law but also its spirit. Ethically, people want companies to pursue moral ends, through moral means, for moral motives.

Of course, expectations can vary within a stakeholder group, leading to ambiguity about what companies need to live up to. Investors are a prime example. Some believe the only duty of a company is to maximize shareholder returns, while others think companies have an obligation to create positive societal effects by employing sound environmental, social, and governance practices.

HOW STAKEHOLDERS EVALUATE TRUST

Trust is multifaceted: Not only do stakeholders depend on businesses for different things, but they may trust an organization in some ways but not others. To judge the trustworthiness of companies, stakeholders continually ask four questions. Let’s look at each in turn.

Is the company competent?

At the most fundamental level companies are evaluated on their ability to create and deliver a product or service. There are two aspects to this:

Technical competence refers to the nuts and bolts of developing, manufacturing, and selling products or services. It includes the ability to innovate, to harness technological advances, and to marshal resources and talent.

Social competence involves understanding the business environment and sensing and responding to changes. A company must have insight into different markets and what offerings may be attractive to them now and in the future. It also needs to recognize how competition is shifting and know how to work with partners such as suppliers, government authorities, regulators, NGOs, the media, and unions.

In the short term technical competence wins customers, but in the long run social competence is necessary to build a company that can navigate a constantly evolving business landscape.

Consider Uber. The company has weathered an avalanche of scandals, including reports of sexual harassment, a toxic corporate culture, and shady business practices in 2017, which led to CEO Travis Kalanick’s departure. Uber’s losses that year came to $4.5 billion. And yet, by the end of 2018, Uber was operating in 63 countries and had 91 million active monthly users. We love Uber, we hate Uber, and sometimes we leave Uber.

We keep using Uber not because we don’t care about its mistakes but because Uber fills a need and does it well. Consumers trust that when they request a ride on Uber, a car will arrive to pick them up. We forget how difficult that is to do. In 2007, two years before Uber’s launch, an app called Taxi Magic entered the market. Taxi Magic worked with fleet owners, and drivers leased cars from the fleet owners, so there was little accountability. If a cab saw another passenger on its way to pick up a Taxi Magic rider, it might abandon the Taxi Magic customer. In 2009 another start-up, Cabulous, launched an app that people could use to book rides. However, that app often didn’t work, and Cabulous had no means of regulating supply and demand, so taxi drivers wouldn’t turn on the app when they were busy. Neither business achieved anything on the scale of Uber. We might have mixed feelings about Uber’s surge pricing, but it helps make sure there are enough drivers on the road to meet demand.

Meanwhile, on a social level Uber has managed to transform the taxi industry. Before Uber, cities limited the number of taxis in the streets by requiring drivers to purchase medallions. In 2013, a medallion in New York City could cost as much as $1.32 million. Such sky-high prices made it difficult for newcomers to enter the market, and lack of competition meant drivers had little incentive to provide good service. Uber brought new drivers into the market, improved service, and increased accessibility to rides in areas with limited taxi coverage.

Still, we use Uber with mixed feelings. Uber achieved much of its growth by quickly acquiring capital, which allowed it to develop technology for fast pickups and to offer drivers high pay and riders low fares. At the same time it was a ruthless competitor that reportedly was not above using underhanded tactics, such as ordering and then cancelling Lyft rides (a charge Uber denied) and misleading drivers about their potential earnings.

We don’t trust Uber to treat its employees or customers well or to conduct business cleanly. In other words we don’t trust Uber’s motives, means, or impact. This has consequences. Although Uber was projected to reach 44 million users in 2017, it hit only 41 million. Since then Uber’s growth has continued to be lower than expected, and the company has ceded market share to Lyft. This year Uber’s much-anticipated IPO underperformed after thousands of Uber drivers went on strike to protest their working conditions. The company’s stock price fell by 11% after its first earnings report for 2019 revealed that it had lost more than $1 billion in its first quarter.

Is the company motivated to serve others’ interests as well as its own?

Stakeholders need to believe a company is doing what’s good for them, not just what’s best for itself. However, stakeholders’ concerns and goals aren’t all the same. While many actions can serve multiple parties, companies must figure out how to prioritize stakeholder interests and avoid harming one group in an attempt to benefit another.

To determine whether they’re doing right by all of their stakeholders, companies should examine their own motivations — by asking these three questions:

  • Do we tell the truth?
  • On whose behalf are we acting?
  • Do our actions actually benefit those who trust us?

Honeywell is an example of a company that works hard to serve — and balance — the needs of all its stakeholders. Let’s look at what happened there during the Great Recession, when it needed to reduce costs but wanted to keep making good on stakeholder expectations. Dave Cote, Honeywell’s CEO at the time, explained how the company thought about that challenge: “We have these three constituencies we have to manage. If we don’t do a great job with customers, both employees and investors will get hurt. So we said our first priority is customers. We need to make sure we can still deliver, that it’s a quality product, and that if we’ve committed to a project, it will get done on time.”

For investors and employees, he continued, “we have to balance the pain, because if you’re in the middle of a recession, there’s going to be pain….Investors need to know they can count on the company, that we’re also going to be doing all the right things for the long term, but we’re thinking about them. After all, they’re the owners of the company, and we work for them.…But at the same time we need to recognize that the employees are the base for success in the future…and we need to be thoughtful about how we treat them. And I think if you get the balance right between those two, yeah, investors might not be as happy in the short term if you could have generated more earnings, but they’re definitely going to be happier in the longer term. Employees might not be as happy in the short term because they might have preferred that you just say to heck with all the investors. But in the long term they’re going to benefit also because you’re going to have a much more robust company for them to be part of.”

During the recession, Honeywell used furloughs, rather than layoffs, to lower payroll costs. But it limited the scale and duration of the furloughs by first implementing a hiring freeze, eliminating wage increases, reducing overtime, temporarily halting the employee rewards and recognition program, and cutting the company match for 401(k)s from 100% to 50%. The company distributed a reduced bonus pool as restricted stock so that employees could share in the stock’s post-recovery upside. And Cote and his entire leadership team refused to take a bonus in 2009, reinforcing the message of shared pain.

To protect customers’ interests during the downturn, Honeywell came up with the idea of placing advance orders with suppliers that the company would activate as soon as sales picked up. Suppliers were happy with the guaranteed production, and Honeywell stole a march on its competitors by filling customer orders faster than they could as the recovery began.

In the long run, those moves paid off for investors. During the recovery, from 2009 to 2012, they were rewarded with a 75% increase in Honeywell’s total stock value — which was 20 percentage points higher than the stock value increase of its nearest competitor.

Cote also built trust with the public by moving from a previous approach of litigating claims for asbestos and environmental damage to settling them. Honeywell began to issue payouts of $150 million for claims annually, making its liabilities more manageable and easing investors’ worries about future litigation costs. Cote also systematically went about cleaning up contaminated sites. That kind of attention to the interests of stakeholders gave people faith in the company’s good intentions.

Does the company use fair means to achieve its goals?

A company’s approach to dealing with customers, employees, investors, and society often comes under scrutiny. Companies that are trusted are given more leeway to create rules of engagement. Companies that aren’t face regulation. Just ask Facebook.

To build strong trust, firms need to understand — and measure up on — four types of fairness that organizational scholars have identified:

Procedural fairness: Whether good processes, based on accurate data, are used to make decisions and are applied consistently, and whether groups are given a voice in decisions affecting them.
Distributive fairness: How resources (such as pay and promotions) or pain points (such as layoffs) are allocated.
Interpersonal fairness: How well stakeholders are treated.
Informational fairness: Whether communication is honest and clear. (In a 2011 study, Jason Colquitt and Jessica Rodell found that this was the most important aspect for developing trust.)

The French tire maker Michelin learned how important it is to have fair processes in 1999, when it decided to cut 7,500 jobs after posting a 17% increase in profits. The outrage in response to that move was so great that eventually the French prime minister outlawed subsidies for any business making layoffs without proof of financial duress.

So in 2003, when Michelin realized it would have to continue restructuring to remain competitive, the company decided it needed to find a better way. It spent the next decade developing new approaches to managing change in its manufacturing facilities. The first strategy, called “ramp down and up,” focused on shifting resources among plants — closing some while expanding others — as new products were brought on line and market needs evolved. Under this strategy, Michelin made every effort to keep affected employees in jobs at Michelin. The company would help them relocate to factories that were growing and provided support for the transition, such as assistance finding housing and information on the schools in their new towns. When relocation was not an option, Michelin would provide employees training in skills needed for jobs that were available locally and offer them professional counseling and support groups.

Success with the ramp-down-and-up approach led Michelin’s leaders to later devise a bolder “turnaround” strategy, under which the management and employees of factories at risk of being shut down could propose detailed business plans to return them to profitability. If accepted, the plans would trigger investment from Michelin.

In carrying out these new approaches, the company demonstrated procedural, informational, interpersonal, and distributive fairness. In total it conducted 15 restructuring programs from 2003 to 2013, which included closing some plants while growing others and changing the mix of production capabilities among plants. But those reorganization efforts didn’t get a lot of flak from the media, because the public didn’t sound the alarm. In 2015, Michelin’s first plant turnaround won the support of 95% of the factory’s unionized workers. Michelin had demonstrated that it would use its power to treat employees fairly.

Does the company take responsibility for all its impact?

If stakeholders don’t believe a company will produce positive effects, they’ll limit its power. Part of the reason we have trouble forgiving Facebook is that its impact has been so enormous. The company might never have imagined that a hostile government would use its platform to influence an election or that a political consulting firm would harvest its users’ data without their consent, but that’s exactly what happened. And ultimately, what happens on Facebook’s platform is seen as the responsibility of Facebook.

Wanting to generate beneficial effects isn’t enough. Companies should carefully define the kind of impact they desire and then devise ways to measure and foster it. They must also have a plan for handling any unintended impact when it happens.

Pinterest, the social media platform, offers a good counterpoint to Facebook. Pinterest has very clearly defined the impact it wants to have on the world. Its mission statement reads: “Our mission is to help you discover and do what you love. That means showing you ideas that are relevant, interesting, and personal to you, and making sure you don’t see anything that’s inappropriate or spammy.”

In extensive community guidelines, Pinterest details what it doesn’t allow. For example, the company explains that it will “remove hate speech and discrimination, or groups and people that advocate either.” Pinterest then elaborates: “Hate speech includes serious attacks on people based on their race, ethnicity, national origin, religion, gender identity, sexual orientation, disability or medical condition. Also, please don’t target people based on their age, weight, immigration or veteran status.”

The company trains reviewers to screen the content on its site to enforce its guidelines. Every six months it updates the training and guidelines, even though the process is time-consuming and expensive.

In the fall of 2018, when people in the anti-vaccine movement chose to use the platform to spread their message, Pinterest took a simple yet effective step: It banned vaccination searches. Now if you search for vaccinations on the platform, nothing shows up. A few months later, Pinterest blocked accounts promoting fake cancer treatments and other nonscientifically vetted medical goods.

The company continues to work with outside experts to improve its ability to stop disinformation on its site. Pinterest understands that, given its estimated 250 million users, its platform could be both used and abused, and has taken action to ensure that it doesn’t become a vehicle for causing harm.

HOW TO BUILD AND REBUILD TRUST

Trust is less fragile than we think. Companies can be trusted in some ways but not others and still succeed. And trust can also be rebuilt.

Take the Japanese conglomerate Recruit Holdings. Its core businesses are advertising and online search and staffing, but its life-event platforms have transformed the way people find jobs, get married, and buy cars and houses, while its lifestyle platforms help customers book hair and nail appointments, make restaurant reservations, and take vacations.

From the beginning, Recruit designed its offerings around the principles of creating value and contributing to society. At the time it was launched, in 1960, large Japanese companies typically found new hires by hosting exams for job candidates at the top universities. Smaller companies that couldn’t afford to host exams and students at other universities were shut out of the process. So Recruit’s founder, Hiromasa Ezoe, started a magazine in which employers of all sizes could post job advertisements that could reach students at any university. Soon Recruit added such businesses as a magazine for selling secondhand cars and the first job recruitment magazine aimed specifically at women.

However, in the 1980s, disaster struck the company. Ezoe was caught offering shares in a subsidiary to business, government, and media leaders before it went public. In all, 159 people were implicated in the scandal, and Japan’s prime minister and his entire cabinet were forced to resign. A few years later one of Recruit’s subsidiaries failed, saddling the company with annual interest payments that exceeded its annual income by 3 billion yen. Not long after that, Recruit suffered another major blow, when a main source of revenue, print advertising, was devastated by the rise of the internet.

This sequence of events would have easily felled another company, yet in 2018 Recruit had 40,000 employees and sales of $20 billion and operated in more than 60 countries. Today it’s an internet giant, running 264 magazines (most online), some 200 websites, and 350 mobile apps. Despite its setbacks, Recruit continued to attract customers, nurture the best efforts of committed employees, and reward investors, and regained the respect of society.

To many executives, what Recruit pulled off sounds impossible. That may be because they subscribe to five popular myths that prevent people from understanding how to build and rebuild trust. Let’s explore each of those myths and see how Recruit’s experiences prove them wrong.

Myth: Trust has no boundaries.
Reality: Trust is limited.

Trust has three main components: the trusted party, the trusting party, and the action the trusted party is expected to perform. It’s built by creating real but narrowly defined relationships.

Recruit was respected for its competence and, in particular, the way it trained its advertising salespeople to actively observe customers and come up with ways to make their businesses more successful. In the wake of the scandal, Recruit kept focusing on delivering the same high level of service. Because the stock violation didn’t alter the company’s ability to meet customers’ expectations of competence, customers were willing to overlook it, and Recruit lost very few of them.

Myth: Trust is objective.
Reality: Trust is subjective.

Trust is based on the judgment of people and groups, not on some universal code of good conduct. If trust were a universal standard, Recruit’s scandal would have led to its demise. However, even if society was outraged by the founder’s actions (employees recalled that their children were embarrassed by their parents’ jobs), customers still believed that Recruit’s employees had their interests at heart. In time customers’ trust led to increased profits, which made Recruit attractive to investors and society.

Myth: Trust is managed from the outside in — by controlling a firm’s external image.
Reality: Trust is managed from the inside out — by running a good business.

All too often managers believe that improving a company’s reputation is the work of advertising and PR firms or ever-vigilant online image-protection platforms. In actuality, reputation is an output that results when a company uses fair processes to deal with stakeholders. Be trustworthy and you will be trusted. Recruit had not only a track record for delivering good products and good service but a salesforce that was willing to work to save the company. Why? Because it had created a culture and systems that engaged and motivated employees. Employees wanted to save Recruit because they could not imagine a better place to work.

Recruit was built on the belief that employees do their best work when they discover a passion and learn to rely on themselves to pursue it. The company’s motto is “Create your own opportunities and let the opportunities change you.” Managers ask employees “Why are you here?” to help them invent projects that link their passions to a contribution to society. Here’s how one employee in Recruit’s Lifestyle Company recently described his project: “I’m involved with the development of a smartphone app…which helps men monitor their fertility and lower the obstacles they face in trying to conceive.…It is a real challenge to envision products that do not yet exist and make them real, but I am confident that in some small way my creative abilities can provide a service that will help people.”

To ensure that all employees feel inspired by their work, Recruit makes them a unique offer: Once they reach the age of 35, they have the option of taking a $75,000 retirement bonus, provided they’ve been at Recruit at least six and a half years. The amount of the bonus increases as employees grow older. This offer is accompanied by career coaching that helps people make the right choice. People who have other dreams use the bonus to transition to different careers, making way for new employees with fresh perspectives on the needs of customers and society.

Myth: Companies are judged for their purpose.
Reality: Companies are judged for their purpose and their impact.

Recruit’s purpose had always been to add value to society. However, that did not protect the company from fallout from the scandal. Recruit was forced to take responsibility for the impact it had on the country before it could regain people’s trust. Because its senior managers understood this, they disregarded PR’s dictate not to discuss the scandal and told employees they could too. Kazuhiro Fujihara, who was the head of sales at the time, explains: “I gathered my employees and told them we could criticize the company for what it had done. PR said we couldn’t criticize the company, but I ignored that.” Today, Recruit has a section on its website describing the scandal, what it learned, and the actions it took to ensure that it would not let something similar happen again. Recruit was well aware that even though the scandal was caused by its founder, Ezoe’s actions were still its responsibility.

Myth: Trust is fragile. Once lost, it can never be regained.
Reality: Trust waxes and wanes.

More than three decades later, Recruit’s stock scandal is still infamous, but the company is thriving. The fall from grace was, Recruit says on its website, “an opportunity to transform ourselves into a new Recruit by encouraging each employee to confront the situation, think, make suggestions, and take action with a sense of ownership rather than waiting passively based on the assumption that the management team would rectify the situation. All proposals were welcomed, including those concerning new business undertakings and business improvements, provided they were forward looking.” That approach helped Recruit evolve and grow. It has expanded so much internationally, in fact, that 46% of revenues now come from outside Japan (up from 3.6% in 2011).

. . .

Now that we’ve broken down what trust is made of, let’s put it all together.

Building trust depends not on good PR but rather on clear purpose, smart strategy, and definitive action. It takes courage and common sense. It requires recognizing all the people and groups your company affects and focusing on serving their interests, not just your firm’s. It means being competent, playing fair, and most of all, acknowledging and, if necessary, remediating, all the impact your company has, whether intended or not.

It’s not always possible to make decisions that completely delight each of your stakeholder groups, but it is possible to make decisions that keep faith with and retain the trust they have in your company.

Teaching Robots Right from Wrong?

Dec 7, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Vyacheslav Polonski and Jane Zavalishina

Curated by Helena M. Herrero Lamuedra

Today, it is difficult to imagine a technology that is as enthralling and terrifying as machine learning. While media coverage and research papers consistently tout the potential of machine learning to become the biggest driver of positive change in business and society, the lingering question on everyone’s mind is: “Well, what if it all goes terribly wrong?”

For years, experts have warned against the unanticipated effects of general artificial intelligence (AI) on society. Ray Kurzweil predicts that by 2029 intelligent machines will be able to outsmart human beings. Stephen Hawking argues that “once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate”. Elon Musk warns that AI may constitute a “fundamental risk to the existence of human civilization”. Alarmist views on the terrifying potential of general AI abound in the media.

More often than not, these dystopian prophecies have been met with calls for a more ethical implementation of AI systems: the idea that engineers should somehow imbue autonomous systems with a sense of ethics. According to some AI experts, we can teach our future robot overlords to tell right from wrong, akin to a “good Samaritan AI” that will always act justly on its own and help humans in distress.

Although this future is still decades away, there is much uncertainty today as to how, if at all, we will reach this level of general machine intelligence. What is more crucial at the moment is that even the narrow AI applications that exist today demand urgent attention to the ways in which they make moral decisions in practical, day-to-day situations. For example, this is relevant when algorithms make decisions about who gets access to loans or when self-driving cars have to calculate the value of a human life in hazardous situations.

Teaching morality to machines is hard because humans can’t objectively convey morality in measurable metrics that a computer can easily process. In fact, it is even questionable whether we, as humans, have a sound understanding of morality that we can all agree on. In moral dilemmas, humans tend to rely on gut feeling instead of elaborate cost-benefit calculations. Machines, on the other hand, need explicit and objective metrics that can be clearly measured and optimized. For example, an AI player can excel in games with clear rules and boundaries by learning how to optimize the score through repeated playthroughs.
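To make the contrast concrete, here is a minimal sketch, in Python, of the kind of explicit-metric optimization the paragraph describes. Everything in it (the three "moves," their payoffs, the exploration rate) is invented for illustration; the point is that the agent improves only because the score is defined for it in advance.

```python
import random

# Hypothetical game: three "moves" with hidden expected scores. The agent can
# optimize this because the metric is explicit and measurable.
TRUE_PAYOFFS = [0.2, 0.5, 0.8]

def play(move: int) -> float:
    """Simulate one noisy playthrough of a move."""
    return TRUE_PAYOFFS[move] + random.gauss(0, 0.1)

estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]

for step in range(10_000):
    # Epsilon-greedy: mostly exploit the best-known move, sometimes explore.
    if random.random() < 0.1:
        move = random.randrange(3)
    else:
        move = max(range(3), key=lambda m: estimates[m])
    reward = play(move)
    counts[move] += 1
    estimates[move] += (reward - estimates[move]) / counts[move]  # running mean

print("Learned payoff estimates:", [round(e, 2) for e in estimates])
# The agent converges on the best move because the score is explicit.
# No analogous objective exists for "act morally" unless humans define one.
```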

After its experiments with deep reinforcement learning on Atari video games, Alphabet’s DeepMind was able to beat the best human players of Go. Meanwhile, OpenAI amassed “lifetimes” of experiences to beat the best human players at the Valve Dota 2 tournament, one of the most popular e-sports competitions globally.

But in real-life situations, optimization problems are vastly more complex. For example, how do you teach a machine to algorithmically maximize fairness or to overcome racial and gender biases in its training data? A machine cannot be taught what is fair unless the engineers designing the AI system have a precise conception of what fairness is.
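As a concrete illustration, here is a minimal sketch of one candidate definition engineers might commit to: demographic parity, which asks whether approval rates are equal across groups. The toy data are invented; the sketch shows that "maximizing fairness" only becomes computable once a precise conception like this has been chosen.

```python
# Invented loan decisions, tagged by group, to illustrate one fairness metric.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": True},
]

def approval_rate(group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

# Demographic parity: the gap between group approval rates. A gap of 0.0
# would count as "fair" by this metric, though other metrics may disagree.
gap = abs(approval_rate("A") - approval_rate("B"))
print(f"Demographic-parity gap: {gap:.2f}")
```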

This has led some authors to worry that a naive application of algorithms to everyday problems could amplify structural discrimination and reproduce biases in the data they are based on. In the worst case, algorithms could deny services to minorities, impede people’s employment opportunities or get the wrong political candidate elected.

Based on our experiences in machine learning, we believe there are three ways to begin designing more ethically aligned machines:

1. Define ethical behavior

AI researchers and ethicists need to formulate ethical values as quantifiable parameters. In other words, they need to provide machines with explicit answers and decision rules for any potential ethical dilemma they might encounter. This would require that humans agree among themselves on the most ethical course of action in any given situation – a challenging but not impossible task. For example, Germany’s Ethics Commission on Automated and Connected Driving has recommended explicitly programming ethical values into self-driving cars to prioritize the protection of human life above all else. In the event of an unavoidable accident, the car should be “prohibited to offset victims against one another”. In other words, a car shouldn’t be able to choose whom to endanger based on individual features, such as age, gender or physical/mental constitution, when a crash is inescapable.
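A hypothetical sketch of what such a decision rule could look like in code follows. It reflects one reading of the commission's guideline, in which the system may count the number of people at risk but never consults personal features; the scenario, names, and data structures are all invented.

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int       # present in the sensor data...
    gender: str    # ...but deliberately never consulted below

def choose_trajectory(options: dict[str, list[Person]]) -> str:
    """Pick the trajectory endangering the fewest people, and nothing else.

    The rule is explicit and auditable: personal attributes of the people
    at risk are never read, so victims are not offset against one another
    on the basis of individual features.
    """
    return min(options, key=lambda name: len(options[name]))

options = {
    "swerve_left": [Person(8, "f")],
    "brake_straight": [Person(70, "m"), Person(35, "f")],
}
print(choose_trajectory(options))  # "swerve_left": fewest people at risk
```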

2. Crowdsource our morality

Engineers need to collect enough data on explicit ethical measures to appropriately train AI algorithms. Even after we have defined specific metrics for our ethical values, an AI system might still struggle to pick them up if there is not enough unbiased data to train the models. Getting appropriate data is challenging, because ethical norms cannot always be clearly standardized. Different situations require different ethical approaches, and in some situations there may not be a single ethical course of action at all – just think about lethal autonomous weapons that are currently being developed for military applications. One way of solving this would be to crowdsource potential solutions to moral dilemmas from millions of humans. For instance, MIT’s Moral Machine project shows how crowdsourced data can be used to effectively train machines to make better moral decisions in the context of self-driving cars.
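As a toy illustration of the crowdsourcing idea, the sketch below aggregates invented respondent votes on invented dilemmas into training labels by majority vote. Real projects like Moral Machine use far richer scenarios and analysis than this.

```python
from collections import Counter

# Invented crowd responses: many people judge the same dilemmas.
votes = {
    "dilemma_001": ["swerve", "swerve", "brake", "swerve"],
    "dilemma_002": ["brake", "brake", "swerve"],
}

# Majority vote turns raw judgments into labels a model could train on.
training_labels = {
    dilemma: Counter(answers).most_common(1)[0][0]
    for dilemma, answers in votes.items()
}
print(training_labels)  # {'dilemma_001': 'swerve', 'dilemma_002': 'brake'}
# Majority vote is the crudest possible aggregation; real systems would need
# to weigh disagreement, context, and cultural variation.
```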

3. Make AI transparent

Policymakers need to implement guidelines that make AI decisions with respect to ethics more transparent, especially with regard to ethical metrics and outcomes. If AI systems make mistakes or have undesired consequences, we cannot accept “the algorithm did it” as an adequate excuse. But we also know that demanding full algorithmic transparency is technically untenable (and, quite frankly, not very useful). Neural networks are simply too complex to be scrutinized by human inspectors. Instead, there should be more transparency on how engineers quantified ethical values before programming them, as well as the outcomes that the AI has produced as a result of these choices. For self-driving cars, for instance, this could imply that detailed logs of all automated decisions are kept at all times to ensure their ethical accountability.
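Here is a minimal sketch of what such decision logging could look like. The field names, metric, and scenario are invented, but the principle is the one the paragraph describes: record the inputs, the ethical metric consulted, and the outcome in an append-only trail that auditors can inspect.

```python
import json
import time

def log_decision(inputs: dict, metric: str, outcome: str,
                 logfile: str = "decisions.jsonl") -> None:
    """Append one automated decision to an audit trail."""
    record = {
        "timestamp": time.time(),
        "inputs": inputs,            # what the system saw
        "ethical_metric": metric,    # which quantified value it consulted
        "outcome": outcome,          # what it did
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")  # one JSON record per line

# Hypothetical usage in a self-driving context:
log_decision({"pedestrians_detected": 2, "speed_kmh": 48},
             metric="minimize_people_at_risk",
             outcome="brake_straight")
```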

We believe that these three recommendations should be seen as a starting point for developing ethically aligned AI systems. If we fail to imbue ethics into AI systems, we may be placing ourselves in the dangerous situation of allowing algorithms to decide what’s best for us. For example, in an unavoidable accident situation, self-driving cars will need to make some decision, for better or worse. But if the car’s designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm. This means that we cannot simply refuse to quantify our values. By walking away from this critical ethical discussion, we are making an implicit moral choice. And as machine intelligence becomes increasingly pervasive in society, the price of inaction could be enormous – it could negatively affect the lives of billions of people.

Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is and how it can be measured and optimized. For AI engineers, this may seem like a daunting task. After all, defining moral values is a challenge mankind has struggled with throughout its history. Nevertheless, the state of AI research requires us to finally define morality and to quantify it in explicit terms. Engineers cannot build a “good Samaritan AI” as long as they lack a formula for the good Samaritan human.

Scientists Call Out Ethical Concerns for the Future of Neuro-technology

Nov 27, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Edd Gent

Curated by Helena M. Herrero Lamuedra

For some die-hard tech evangelists, using neural interfaces to merge with AI is the inevitable next step in humankind’s evolution. But a group of 27 neuroscientists, ethicists, and machine learning experts have highlighted the myriad ethical pitfalls that could be waiting.

To be clear, it’s not just futurologists banking on the convergence of these emerging technologies. The Morningside Group estimates that private spending on neurotechnology is in the region of $100 million a year and growing fast, while in the US alone public funding since 2013 has passed the $500 million mark.

The group is made up of representatives from international brain research projects, tech companies like Google and neural interface startup Kernel, and academics from the US, Canada, Europe, Israel, China, Japan, and Australia. They met in May to discuss the ethics of neuro-technology and AI, and have now published their conclusions in the journal Nature.

While the authors concede it’s likely to be years or even decades before neural interfaces are used outside of limited medical contexts, they say we are headed towards a future where we can decode and manipulate people’s mental processes, communicate telepathically, and technologically augment human mental and physical capabilities.

“Such advances could revolutionize the treatment of many conditions…and transform human experience for the better,” they write. “But the technology could also exacerbate social inequalities and offer corporations, hackers, governments, or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency, and an understanding of individuals as entities bound by their bodies.”

The researchers identify four key areas of concern: privacy and consent, agency and identity, augmentation, and bias. The first and last topics are already mainstays of warnings about the dangers of unregulated and unconscientious use of machine learning, and the problems and solutions the authors highlight are well-worn.

On privacy, the concerns are much the same as those raised about the reams of personal data corporations and governments are already hoovering up. The added sensitivity of neural data strengthens the case for suggestions like an automatic opt-out for sharing neural data and bans on individuals selling their data.

But other suggestions, like using technological approaches such as “differential privacy,” “federated learning,” and blockchain to better protect data, are equally applicable to non-neural data. Similarly, the ability of machine learning algorithms to pick up bias inherent in training data is already a well-documented problem, and one with ramifications that go beyond just neuro-technology.
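For readers unfamiliar with the first of those techniques, here is a minimal sketch of the differential-privacy idea under simple assumptions: a count is published with calibrated Laplace noise so that no single record can be identified from the output. The epsilon value and data are invented for illustration.

```python
import random

def dp_count(values: list[bool], epsilon: float = 0.5) -> float:
    """Release a count with Laplace noise scaled to sensitivity/epsilon."""
    true_count = sum(values)
    sensitivity = 1.0  # adding or removing one person changes the count by at most 1
    # A Laplace sample is the difference of two exponential samples.
    noise = (random.expovariate(epsilon / sensitivity)
             - random.expovariate(epsilon / sensitivity))
    return true_count + noise

# Hypothetical survey responses, e.g., "did this stimulus evoke a response?"
responses = [True, False, True, True, False]
print(f"Noisy count: {dp_count(responses):.1f}")  # close to 3, but deniable
```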

When it comes to identity, agency, and augmentation, though, the authors show how the convergence of AI and neuro-technology could result in entirely novel challenges that could test our assumptions about the nature of the self, personal responsibility, and what ties humans together as a species.

They ask the reader to imagine if machine learning algorithms combined with neural interfaces allowed a form of ‘auto-complete’ function that could fill the gap between intention and action, or if you could telepathically control devices at great distance or in collaboration with other minds. These are all realistic possibilities that could blur our understanding of who we are and what actions we can attribute as our own.

The authors suggest adding “neurorights” that protect identity and agency to international treaties like the Universal Declaration of Human Rights, or possibly the creation of a new international convention on the technology. This isn’t an entirely new idea; in May, I reported on a proposal for four new human rights to protect people against neural implants being used to monitor their thoughts or interfere with or hijack their mental processes.

But these rights were designed primarily to protect against coercive exploitation of neuro-technology or the data it produces. The concerns around identity and agency are more philosophical, and it’s less clear that new rights would be an effective way to deal with them. While the examples the authors highlight could be forced upon someone, they sound more like something a person would willingly adopt, potentially waiving rights in return for enhanced capabilities.

The authors suggest these rights could enshrine a requirement to educate people about the possible cognitive and emotional side effects of neuro-technologies rather than the purely medical impacts. That’s a sensible suggestion, but ultimately people may have to make up their own minds about what they are willing to give up in return for new abilities.

This leads to the authors’ final area of concern—augmentation. As neuro-technology makes it possible for people to enhance their mental, physical, and sensory capacities, it is likely to raise concerns about equitable access, pressure to keep up, and the potential for discrimination against those who don’t. There’s also the danger that military applications could lead to an arms race.

The authors suggest that guidelines should be drawn up at both the national and international levels to set limits on augmentation, in a similar way to those being drawn up to control gene editing in humans, but they admit that “any lines drawn will inevitably be blurry.” That’s because it’s hard to predict the impact these technologies will have, and building international consensus will be hard because some cultures lend more weight to things like privacy and individuality than others do.

The temptation could be to simply ban the technology altogether, but the researchers warn that this could simply push it underground. In the end, they conclude that it may come down to the developers of the technology to ensure it does more good than harm. Individual engineers can’t be expected to shoulder this burden alone, though.

“History indicates that profit hunting will often trump social responsibility in the corporate world,” the authors write. “And even if, at an individual level, most technologists set out to benefit humanity, they can come up against complex ethical dilemmas for which they aren’t prepared.”

For this reason, they say, industry and academia need to devise a code of conduct similar to the Hippocratic Oath doctors are required to take, and rigorous ethical training needs to become standard when joining a company or laboratory.

Five Sustainable Success Levers

Nov 20, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Kevin Cashman

Curated by Helena M. Herrero Lamuedra

Every leader faces a daunting aspiration: Generate success now, and then continuously accelerate it. It is hard enough to be successful and even more challenging to keep it going in today’s dynamic, change-rich world. As tough as our mandate is, I would suggest a simple, sustainable success formula: purpose generates success, performance sustains it, and ethics ensures the first two endure.

Purpose is the creative force that elevates leaders and teams to move from short-term success to long-term significance. It engages and energizes workforces, customers, vendors, distributors, communities and stakeholders around a common mission, something bigger than products and larger than profit. It is the foundational meaning that unleashes latent energy and motivation as it generates enduring value. Purpose answers the essential question: Why is it so important that we exist? Ethics answers the enduring question: How are we in continuous service to our constituencies?    

As leaders, we have a responsibility to address this significant question, “Why is it so important that we exist?” With this question, we courageously face who we are and how we are in the world. As we reflect on it and the battle that rages for the soul of capitalism, we also may want to consider: How do we view capitalism and the role of business? Will we define business solely in terms of transactional financial levers, designed to accumulate capital, or will we apply our vision to shape business as a more universal lever that serves a higher, more sustainable purpose? Will the top 2% serve the 98%, or will the top 2% dominate, control, and be served by the 98%?

Unilever takes the universal levers of purpose and ethics and tries to serve the 100%. Their core values are much more than aspirational concepts. Their purpose statement is more than a slogan. Yes, they struggle to live it at times, but the constant struggle to serve is a worthy value-creating goal. As purpose-driven leaders remind themselves over and over again: purpose and ethics are not perfection, but the pursuit of service-fueled value.

Dedicating themselves to the core values of “integrity, responsibility, respect and pioneering,” Unilever’s core purpose keeps them focused on succeeding “with the highest standards of corporate behavior towards everyone we work with, the communities we touch, and the environment on which we have an impact.” There is no company-centric charge to be “#1 based on financial metrics” or “winning is all that matters” in their purpose statement. Their considerable success is driven by an ethical conviction to serve.

Paul Polman, CEO of Unilever, expressed his genuine belief and conviction in purpose-driven leadership and the power of service in a Huffington Post article, “Doing Well by Doing Good”: “The power of purpose, passion, and positive attitude drive great long-term business results. Above all, the moment you realize that it’s not about yourself but about working for the common good, or helping others, that’s when you unlock the true leader in yourself.” When purpose becomes personal, it becomes real, powerful and ethical.

Recently, Unilever recruited Marijn Dekkers, another purpose-driven leader, to be chairman of its board. Like Polman, Marijn created significant enduring value during his tenure as CEO of Bayer. His leadership brought vitality and relevance to Bayer’s purpose, its culture, its leadership growth, and its market value. Commenting on this purpose-driven value creation, Marijn shared with me recently, “It is relatively easy to pull financial levers to generate short-term profit. Many people can do that. What is challenging, and the real skill of leadership, is to inspire sustainable growth by relentlessly serving employees, customers, vendors, communities, and the planet. When purpose becomes the generator of profit, then long-term success, service and sustainability have a chance to be realized.”

Expanding on the value-generating power of ethics and purpose, Marijn shared five levers for sustainable leadership success:

• EBITDA Never Inspires: “After a few years, no one remembers the number, but everyone remembers the contributions the products and services have made to the lives of people. Spreadsheets rarely inspire; stories of service move us in a memorable manner.”

• Take the Extra Steps: “Do the right thing before you are forced to do so. Purpose is real, and ethics is operating, when companies go beyond what they need to do for employees, vendors, customers and communities. Even 2% more effort on purpose creates multiple returns for everyone involved. It takes so little but returns so much. Being a good citizen on the things we do not make money on, can actually create more lasting value in the long run.”

• Build Authentic Reputation: “Reputations are not merely a public relations exercise. Reputations are built through ensuring that we are customer and enterprise-serving and not self-serving. Corporations are too often seen as self-serving, so attending to real-service is the counter-balance to negative reputations. The equity of our brand is built through living our purpose in very tangible ways.”

• Do the Right Thing When No One is Looking: Marijn shared a recent story of cycling along a river and wanting to dispose of his stale chewing gum. He realized that there were at least three options: 1) throw it on the grass and mindlessly ride on; 2) wait for a trash bin to come along and toss the gum at it, though very likely someone would need to clean up the mess later; 3) stop to find a leaf, roll the gum up in the leaf, and dispose of it properly. “It took a small sacrifice to find the leaf and carefully dispose of it. But it was clearly the right thing to do.” Real ethics show up in both small and big acts of service.

• Remember Others: “Ethics is remembering others. Lack of ethics and purpose is placing self over service. As a CEO this is tough, since there are so many “others” to consider. But making the attempt to serve as many “others” as possible is the ethically fueled purpose of leadership.”

Purposeful, ethical leadership is a conscious act of self-examination to ensure that our behaviors are really serving people – especially when no one is watching.

What steps can you take today to inspire purpose and remember ethics?

Turning the linear circular: the future of the global economy, leveraging Internet of Things

Jun 5, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Mark Esposito

Curated by Helena M. Herrero Lamuedra

Institutions, both in the private and public sector, can always reap the public relations benefits of doing good, even while still accomplishing their goals. As resources become scarcer, a major way to enhance social performance is through resource conservation, which is being underutilized.

Although the traditional model of the linear economy has long been dominant, and will never be fully replaced, it is essentially wasteful. The circular economy (CE), in comparison, which involves resources and capital goods reentering the system for reuse instead of being discarded, saves on production costs, promotes recycling, decreases waste, and enhances social performance. When CE models are combined with the Internet of Things (IoT), internet-connected devices that gather and relay data to central computers, efficiency rises dramatically. As a result of finite resource depletion, the future economy is destined to become more circular. The economic shift toward CE will undoubtedly be hastened by the already ubiquitous presence of IoT, its profitability, and the positive public response it yields.

Unlike the linear economy, which is a “take, make, dispose” model, the circular economy is an industrial economy that increases resource productivity with the intention of reducing waste and pollution. The main value drivers of CE are (1) extending the use-cycle length of an asset, (2) increasing utilization of an asset, (3) looping/cascading assets through additional use cycles, and (4) regenerating nutrients to the biosphere.

The Internet of Things is the inter-networking of physical devices through electronics and sensors that are used to collect and exchange data. The main value drivers of IoT are the ability to determine the (1) location, (2) condition, and (3) availability of the assets they monitor. By 2020 there are expected to be at least 20 billion IoT-connected devices worldwide.

The nexus between CE’s and IoT’s value drivers greatly enhances CE. If an institution’s goals are profitability and conservation, IoT enables those goals with big data and analysis. By automatically and remotely monitoring the efficiency of a resource during harvesting, during production, and at the end of its use cycle, all parts of the value chain can become more efficient.
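A minimal sketch of those three IoT value drivers in code: a device report carrying an asset's location, condition, and availability, plus a trivial rule for flagging worn assets for recovery. All names and thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AssetReport:
    """One telemetry report from an IoT-instrumented asset."""
    asset_id: str
    location: tuple[float, float]  # (latitude, longitude)
    condition: float               # 0.0 = failed, 1.0 = like new
    available: bool                # free for use right now?

def needs_recovery(report: AssetReport, threshold: float = 0.3) -> bool:
    """Flag assets worn enough to loop back into remanufacturing."""
    return report.condition < threshold

r = AssetReport("pump-0042", (41.39, 2.17), condition=0.25, available=False)
print(needs_recovery(r))  # True: route this asset into the reverse supply chain
```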

When examining the value chain as a whole, the greatest use for IoT is at its end. One way in which this is accomplished is through reverse logistics. Once the time comes for users to discard their assets, IoT can aid in the retrieval of those assets so that they can be recycled into their components. With efficient reverse logistics, goods gain a second life, fewer biological nutrients are extracted from the environment, and the looping/cascading of assets is enabled.

One way to change the traditional value chain is the IoT-enabled leasing model. Instead of selling an expensive appliance or a vehicle, manufacturers can produce them with the intention of leasing them to their customers. By embedding these assets with IoT, manufacturers can monitor each asset’s condition and repair it at precisely the right time. In theory the quality of the asset will improve, since it’s in the producer’s best interest to make it durable rather than disposable and replaceable.
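As a rough sketch of that monitoring logic, the toy function below decides when a leased asset is due for repair, either because its reported condition has dropped below a floor or because it is degrading quickly. The readings and thresholds are invented.

```python
def maintenance_due(condition_history: list[float],
                    floor: float = 0.4, slope_limit: float = -0.05) -> bool:
    """Repair when condition is low or degrading fast, before failure."""
    if condition_history[-1] < floor:
        return True
    if len(condition_history) >= 2:
        drop = condition_history[-1] - condition_history[-2]
        return drop < slope_limit  # sudden degradation between readings
    return False

stream = [0.92, 0.90, 0.81]  # e.g., weekly telemetry from a leased appliance
print(maintenance_due(stream))  # True: 0.81 - 0.90 = -0.09 < -0.05
```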

Even today, many sectors are already benefiting from IoT in resource conservation. In the energy sector, Barcelona has reduced its power grid energy consumption by 33%, while GE has begun using “smart” power meters that reduce customers’ power bills by 10–20%. GE has also automated its wind turbines and solar panels, which adjust automatically to the wind and the angle of the sun.

In the built environment, cities like Hong Kong have implemented IoT monitoring for preventative maintenance of transportation infrastructure, while Rio de Janeiro monitors traffic patterns and crime at its central operations center. Mexico City has installed fans in its buildings that suck up local smog. In the waste management sector, San Francisco and London have installed solar-powered automated waste bins that alert local authorities when they are full, creating ideal routes for trash collection and reducing operational costs by 70%.

Despite this innovation's many advantages, it faces real limitations today. Because legislating for new technologies is difficult, government regulation lags behind innovation. For example, because Brazil, China, and Russia have no legal standards distinguishing remanufactured products from used ones, cross-border reverse supply chains are blocked. Reverse supply chains also suffer from weak consumer demand, a consequence of the low residual value of returned products. And IoT technology itself, which collects so much data about people's private lives, raises major privacy concerns.

Questions arise: Who owns the collected data? How reliable are IoT-dependent systems? How vulnerable are these assets to hackers? Despite the prevalence of IoT today, with 73% of companies investing in big data analytics, most of that data is used merely to detect and control anomalies, and IoT remains vastly underutilized. An oil rig, for example, may have 30,000 sensors, but only 1% of them are examined. Underutilization of IoT cost businesses an estimated $544 billion in 2013 alone.
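
Much of that anomaly-focused use amounts to something like the following sketch: flag readings that deviate sharply from the norm and ignore everything else. The pressure values and the two-standard-deviation rule are illustrative assumptions.

```python
import statistics

# Hypothetical pressure readings from one of an oil rig's thousands of sensors.
readings = [101.2, 100.8, 101.0, 101.1, 140.5, 100.9, 101.3]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag anything more than two standard deviations from the mean.
anomalies = [r for r in readings if abs(r - mean) > 2 * stdev]
print("Anomalous readings:", anomalies)  # -> [140.5]
```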

Even with these barriers, the potential profits and improved social performance make the future of an IoT-enhanced CE bright.

Estimates suggest that institutions adopting CE models could cut costs by 20% while also reducing waste. That efficiency gain, combined with the goodwill generated by conservation, makes this a win-win proposition: even with implementation costs, future profitability will make it a no-brainer.

How to ensure future brain technologies will help and not harm society

May 9, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

Written by P. Murali Doraiswamy, Professor, Duke University; Hermann Garden, Organisation for Economic Co-operation and Development (OECD); and David Winickoff, OECD

Curated by Helena M. Herrero Lamuedra

Thomas Edison, one of the great minds of the second industrial revolution, once said that “the chief function of the body is to carry the brain around.” Understanding the human brain – how it works, and how it is afflicted by diseases and disorders – is an important frontier in science and society today.

Advances in neuroscience and technology increasingly impact intellectual wellbeing, education, business, and social norms. Recent findings confirm the plasticity of the brain over the individual's life. Imaging and brain stimulation technologies are opening up entirely new approaches to treating disease and potentially augmenting cognitive capacity. Unravelling the brain's many secrets will have profound societal implications that require a closer "contract" between science and society.

Convergence across the physical sciences, engineering, the biological sciences, the social sciences, and the humanities has boosted innovation in brain science and technology. It offers great potential for a systems-biology approach that unifies heterogeneous data from "omics" tools, imaging technologies such as fMRI, and behavioural science.

Citizen science, the convergence between science and society, has already proved successful in EyeWire, where people competed to map the 1,000-neuron connectome of the mouse retina. The use of nanoparticles to coat implanted abiotic devices also offers great potential for improving the immunologic acceptance of invasive diagnostics. Brain-inspired neuromorphic engineering aims to develop novel computer systems with brain-like characteristics, including low energy consumption, fault tolerance, self-learning capabilities, and some measure of intelligence. Here, the convergence of nanotechnology with neuroscience could help build neuro-inspired computer chips, brain-machine interfaces, and robots with artificial-intelligence systems.
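
As a concrete hint of what "brain-like" computation means, below is a minimal leaky integrate-and-fire neuron, the basic unit many neuromorphic chips implement in silicon. The leak factor, threshold, and input currents are arbitrary illustrative values, not parameters from any particular chip.

```python
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: integrate input with decay, spike at threshold."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = leak * v + current  # membrane potential leaks, then integrates input
        if v >= threshold:      # crossing the threshold...
            spikes.append(t)    # ...emits a spike
            v = 0.0             # ...and resets the potential
    return spikes

print(simulate_lif([0.3, 0.4, 0.5, 0.1, 0.6, 0.6]))  # -> [2, 5]
```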

Future opportunities for cognitive enhancement for improved attentiveness, memory, decision making, and control through, for example, non-invasive brain stimulation and neural implants have raised, and shall continue to raise, profound ethical, legal, and social questions. What is societally acceptable and desirable, both now and in the future?

At a recent OECD workshop, we identified five possible systemic changes that could help speed up neurotechnology developments to meet pressing health challenges and societal needs.

1. Responsible research

There is growing interest in discussing and unpacking the ethical and societal aspects of brain science as the technologies and applications are developed. Much can be learned from other experiences in disruptive innovation. The international Human Genome Project (1990-2003), for example, was one of the earlier large-scale initiatives in which social scientists worked in parallel with the natural sciences in order to consider the ethical, legal and social issues (ELSI) of their work.

The deliberation of ELSI and Responsible Research and Innovation (RRI) in nanotechnologies is another example of how societies, in some jurisdictions, have approached R&D activities, and the role of the public in shaping, or at least informing, their trajectory. RRI knits together activities that previously seemed sporadic. According to Jack Stilgoe, Senior Lecturer in the Department of Science and Technology Studies, University College London, the aim of responsible innovation is to connect the practice of research and innovation in the present to the futures that it promises.

Frameworks such as ELSI and RRI should more actively engage patients and patient organisations early in the development cycle, and in a meaningful way. This could be achieved through continuous public platforms and policy discussion, rather than traditional one-off public engagement, and through deliberation of scientific advances and ELSI in culture and art.

Research funders – public agencies, private investors, foundations, as well as universities themselves – are particularly well positioned to shape trajectories of technology and society. Through their funding power, they have unique capacity to help place scientific work within social, ethical, and regulatory contexts.

It is an opportune time for funders to: 1) strengthen the array of approaches and mechanisms for building a robust neurotechnology landscape that meaningfully engages human values and is informed by them; 2) discuss options to foster open and responsible innovation; and 3) better understand the opportunities and challenges of building joint initiatives in research and product development.

2. Anticipatory governance

Society and industry would benefit from earlier, and more inclusive, discussions about the ethical, legal and social implications of how neurotechnologies are being developed and their entry onto the market. For example, the impact of neuromodulatory devices that promise to enhance cognition, alter mood, or improve physical performance on human dignity, privacy, and equitable access could be considered earlier in the research and development process.

3. Open innovation

Given the significant investment risks and high failure rates of clinical trials in central nervous system disorders, companies could adopt more open innovation approaches in which public and private stakeholders actively collaborate, share assets including intellectual property, and invest together.

4. Avoiding neuro-hype

Popular media is full of colorful brain images used to illustrate stories about neuroscience. Unproven health claims give rise to so-called 'neuro-hype' and 'neuro-myths'. Misinformation is a strong possibility where scientific work carries major social implications (for example, work on mental illness, competency, or intelligence).

Such misinformation can breed public mistrust and undermine the formation of markets. Evidence-based policies and guidelines are needed to support the responsible development and use of neurotechnology in medical practice and in over-the-counter products. Policymakers and regulators could lead in charting a clear path for translating neurotechnology discoveries into commercially viable and sustainable human-health benefits.

5. Access and equity

Policymakers should discuss the socio-economic questions raised by neurotechnology. Rising disparities in access to often high-priced medical innovation require tailored solutions for poorer countries. Public-private partnerships and simplified technology can help bring innovation to resource-limited countries.

In addition to helping people with neurological and psychiatric disorders, the biggest cause of disability worldwide, neurotechnologies will shape every aspect of society in the future. A roadmap for guiding responsible research and innovation in neurotechnology may be transformative.