#AIforgood #ethics #leadership #digitaldisruption #digitaltransformation
Enterprises must confront the ethical implications of AI use as they increasingly roll out technology that has the potential to reshape how humans interact with machines.
How AI systems can be biased
More human-like bots raise stakes for ethical AI use
Ethical AI is needed for broad AI adoption
#AIforgood #digitaltransformation #techdisruption #sustainabledevelopmentgoals
AI is not a silver bullet, but it could help tackle some of the world’s most challenging social problems.
First: Mapping AI use cases to domains of social good
Equality and inclusion
Health and hunger
Information verification and validation
Public and social-sector management
Security and justice
Second: AI capabilities that can be used for social good
Image classification and object detection are powerful computer-vision capabilities
Structured deep learning also may have social-benefit applications
Advanced analytics can be a more time- and cost-effective solution than AI for some use cases
Third: Overcoming bottlenecks, especially for data and talent
Data needed for social-impact uses may not be easily accessible
The expert AI talent needed to develop and train AI models is in short supply
‘Last-mile’ implementation challenges are also a significant bottleneck for AI deployment for social good
Fourth: Risks to be managed
Breaching the privacy of personal information could cause harm
Safe use and security are essential for societal good uses of AI
Decisions made by complex AI models will need to become more readily explainable
Fifth: Scaling up the use of AI for social good
Improving data accessibility for social-impact cases
Overcoming AI talent shortages is essential for implementing AI-based solutions for social impact
Dec 7, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.
By Vyacheslav Polonski and Jane Zavalishina
Curated by Helena M. Herrero Lamuedra
Today, it is difficult to imagine a technology that is as enthralling and terrifying as machine learning. While media coverage and research papers consistently tout the potential of machine learning to become the biggest driver of positive change in business and society, the lingering question on everyone’s mind is: “Well, what if it all goes terribly wrong?”
For years, experts have warned against the unanticipated effects of general artificial intelligence (AI) on society. Ray Kurzweil predicts that by 2029 intelligent machines will be able to outsmart human beings. Stephen Hawking argues that “once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate”. Elon Musk warns that AI may constitute a “fundamental risk to the existence of human civilization”. Alarmist views on the terrifying potential of general AI abound in the media.
More often than not, these dystopian prophecies have been met with calls for a more ethical implementation of AI systems; that somehow engineers should imbue autonomous systems with a sense of ethics. According to some AI experts, we can teach our future robot overlords to tell right from wrong, akin to a “good Samaritan AI” that will always act justly on its own and help humans in distress.
This level of general machine intelligence may still be decades away, and there is much uncertainty as to how, if at all, we will ever reach it. What is more crucial at the moment is that even the narrow AI applications that exist today require our urgent attention to the ways in which they make moral decisions in practical, day-to-day situations. This is relevant, for example, when algorithms decide who gets access to loans, or when self-driving cars have to calculate the value of a human life in hazardous situations.
Teaching morality to machines is hard because humans cannot objectively convey morality in measurable metrics that a computer can easily process. In fact, it is questionable whether we, as humans, have a sound understanding of morality that we can all agree on. In moral dilemmas, humans tend to rely on gut feeling rather than elaborate cost-benefit calculations. Machines, on the other hand, need explicit and objective metrics that can be clearly measured and optimized. An AI player can excel in games with clear rules and boundaries, for example, by learning how to optimize the score through repeated playthroughs.
After its experiments with deep reinforcement learning on Atari video games, Alphabet’s DeepMind was able to beat the best human players of Go. Meanwhile, OpenAI amassed “lifetimes” of experience to beat some of the best human players at Valve’s Dota 2 tournament, one of the most popular e-sports competitions globally.
But in real-life situations, optimization problems are vastly more complex. For example, how do you teach a machine to algorithmically maximize fairness or to overcome racial and gender biases in its training data? A machine cannot be taught what is fair unless the engineers designing the AI system have a precise conception of what fairness is.
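The difficulty of pinning fairness down becomes concrete as soon as you try to measure it. As an illustration only, here is a minimal Python sketch of one common quantitative proxy, the demographic parity gap, applied to hypothetical loan decisions (the data and the metric's use here are invented for this example, not drawn from any real system):

```python
def demographic_parity_gap(predictions, groups):
    """Difference in favourable-outcome rates between demographic groups.

    predictions: 1 = favourable decision (e.g. loan approved), 0 = not.
    groups: the demographic group of each individual.
    A gap of 0 means all groups receive favourable decisions at the same
    rate; larger gaps signal possible bias, by this one metric at least.
    """
    totals = {}
    for pred, group in zip(predictions, groups):
        approved, seen = totals.get(group, (0, 0))
        totals[group] = (approved + pred, seen + 1)
    rates = [approved / seen for approved, seen in totals.values()]
    return max(rates) - min(rates)

# Hypothetical loan decisions for two groups of four applicants each.
preds  = [1, 1, 0, 1,  0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

Even this tiny example shows the design problem: a different, equally defensible metric (equal error rates, say) can disagree with demographic parity on the same data, so engineers must first decide which conception of fairness they are optimizing for.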
This has led some authors to worry that a naive application of algorithms to everyday problems could amplify structural discrimination and reproduce biases in the data they are based on. In the worst case, algorithms could deny services to minorities, impede people’s employment opportunities or get the wrong political candidate elected.
Based on our experiences in machine learning, we believe there are three ways to begin designing more ethically aligned machines:
1. Define ethical behavior
AI researchers and ethicists need to formulate ethical values as quantifiable parameters. In other words, they need to provide machines with explicit answers and decision rules for any potential ethical dilemma they might encounter. This would require that humans agree among themselves on the most ethical course of action in any given situation – a challenging but not impossible task. For example, Germany’s Ethics Commission on Automated and Connected Driving has recommended specifically programming ethical values into self-driving cars to prioritize the protection of human life above all else. In the event of an unavoidable accident, the car should be “prohibited to offset victims against one another”. In other words, a car shouldn’t choose whom to harm based on individual features, such as age, gender or physical/mental constitution, when a crash is inescapable.
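A rule like the commission's can be made concrete in code. The sketch below is a hypothetical illustration, not the commission's actual specification: the controller may count how many people each trajectory endangers, but it deliberately never reads personal attributes such as age or gender:

```python
def choose_trajectory(options):
    """Pick the trajectory that endangers the fewest people.

    `options` maps a trajectory name to the list of people at risk on
    that path. Each person is a dict that may carry attributes like age
    or gender, but those fields are deliberately never inspected: only
    the head count matters, mirroring the rule against offsetting
    victims against one another based on individual features.
    """
    return min(options, key=lambda name: len(options[name]))

# Hypothetical crash scenario with two possible trajectories.
options = {
    "swerve_left": [{"age": 8}, {"age": 80}],   # two people at risk
    "straight_on": [{"age": 35}],               # one person at risk
}
print(choose_trajectory(options))  # straight_on
```

The point of the sketch is the omission: the ethical constraint shows up as information the algorithm is forbidden to use, which is exactly the kind of decision rule that must be agreed on by humans before it can be programmed.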
2. Crowdsource our morality
Engineers need to collect enough data on explicit ethical measures to appropriately train AI algorithms. Even after we have defined specific metrics for our ethical values, an AI system might still struggle to pick them up if there is not enough unbiased data to train the models. Getting appropriate data is challenging, because ethical norms cannot always be clearly standardized. Different situations require different ethical approaches, and in some situations there may not be a single ethical course of action at all – just think of the lethal autonomous weapons currently being developed for military applications. One way of solving this would be to crowdsource potential solutions to moral dilemmas from millions of humans. For instance, MIT’s Moral Machine project shows how crowdsourced data can be used to effectively train machines to make better moral decisions in the context of self-driving cars.
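The simplest way crowdsourced moral judgments can become training labels is majority voting. The following Python sketch uses invented, Moral-Machine-style data; real projects use far more sophisticated aggregation, but the principle of turning many human answers into one label is the same:

```python
from collections import Counter

def aggregate_judgments(responses):
    """Majority-vote label for each dilemma from crowdsourced responses.

    `responses` maps a dilemma id to the list of actions chosen by
    respondents; the most common answer becomes the training label
    for that dilemma.
    """
    return {dilemma: Counter(votes).most_common(1)[0][0]
            for dilemma, votes in responses.items()}

# Hypothetical crowdsourced answers to two driving dilemmas.
responses = {
    "dilemma_1": ["swerve", "swerve", "stay", "swerve"],
    "dilemma_2": ["stay", "stay", "swerve"],
}
print(aggregate_judgments(responses))
```

Majority voting also exposes the weakness the paragraph above describes: if the crowd itself is biased or split, the label inherits that bias, which is why the quality and diversity of the respondents matters as much as their number.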
3. Make AI transparent
Policymakers need to implement guidelines that make AI decisions with respect to ethics more transparent, especially with regard to ethical metrics and outcomes. If AI systems make mistakes or have undesired consequences, we cannot accept “the algorithm did it” as an adequate excuse. But we also know that demanding full algorithmic transparency is technically untenable (and, quite frankly, not very useful). Neural networks are simply too complex to be scrutinized by human inspectors. Instead, there should be more transparency on how engineers quantified ethical values before programming them, as well as the outcomes that the AI has produced as a result of these choices. For self-driving cars, for instance, this could imply that detailed logs of all automated decisions are kept at all times to ensure their ethical accountability.
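Such decision logging might look like the sketch below. The field names and the ethical weight are hypothetical, but the idea is the one described above: record the inputs the system saw, the quantified ethical values in force, and the outcome, so that every automated decision is reviewable without demanding full transparency into the model itself:

```python
import json
import time

def log_decision(log, inputs, ethical_weights, outcome):
    """Append one automated decision to an append-only audit log.

    Keeping the inputs, the quantified ethical values in force, and the
    resulting outcome makes each decision reviewable after the fact,
    without exposing the neural network's internals.
    """
    log.append(json.dumps({
        "timestamp": time.time(),
        "inputs": inputs,
        "ethical_weights": ethical_weights,
        "outcome": outcome,
    }))

# Hypothetical self-driving-car event.
audit_log = []
log_decision(
    audit_log,
    inputs={"obstacle": "pedestrian", "speed_kmh": 42},
    ethical_weights={"protect_human_life": 1.0},
    outcome="emergency_brake",
)
print(audit_log[0])
```

Serializing each entry as JSON keeps the log machine-readable for later audits; a production system would also sign or hash entries so they cannot be altered after the fact.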
We believe that these three recommendations should be seen as a starting point for developing ethically aligned AI systems. If we fail to imbue ethics into our AI systems, we may be placing ourselves in the dangerous situation of allowing algorithms to decide what’s best for us. For example, in an unavoidable accident situation, a self-driving car will need to make some decision, for better or worse. But if the car’s designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm. This means that we cannot simply refuse to quantify our values. By walking away from this critical ethical discussion, we are making an implicit moral choice. And as machine intelligence becomes increasingly pervasive in society, the price of inaction could be enormous – it could negatively affect the lives of billions of people.
Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is and how it can be measured and optimized. For AI engineers, this may seem like a daunting task. After all, defining moral values is a challenge mankind has struggled with throughout its history. Nevertheless, the state of AI research requires us to finally define morality and quantify it in explicit terms. Engineers cannot build a “good Samaritan AI” as long as they lack a formula for the good Samaritan human.
Nov 27, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.
By Edd Gent
Curated by Helena M. Herrero Lamuedra
For some die-hard tech evangelists, using neural interfaces to merge with AI is the inevitable next step in humankind’s evolution. But a group of 27 neuroscientists, ethicists, and machine learning experts have highlighted the myriad ethical pitfalls that could be waiting.
To be clear, it’s not just futurologists banking on the convergence of these emerging technologies. The Morningside Group estimates that private spending on neurotechnology is in the region of $100 million a year and growing fast, while in the US alone public funding since 2013 has passed the $500 million mark.
The group is made up of representatives from international brain research projects, tech companies like Google and neural interface startup Kernel, and academics from the US, Canada, Europe, Israel, China, Japan, and Australia. They met in May to discuss the ethics of neuro-technology and AI, and have now published their conclusions in the journal Nature.
While the authors concede it’s likely to be years or even decades before neural interfaces are used outside of limited medical contexts, they say we are headed towards a future where we can decode and manipulate people’s mental processes, communicate telepathically, and technologically augment human mental and physical capabilities.
“Such advances could revolutionize the treatment of many conditions…and transform human experience for the better,” they write. “But the technology could also exacerbate social inequalities and offer corporations, hackers, governments, or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency, and an understanding of individuals as entities bound by their bodies.”
The researchers identify four key areas of concern: privacy and consent, agency and identity, augmentation, and bias. The first and last topics are already mainstays of warnings about the dangers of unregulated and unconscientious use of machine learning, and the problems and solutions the authors highlight are well-worn.
On privacy, the concerns are much the same as those raised about the reams of personal data corporations and governments are already hoovering up. The added sensitivity of neural data makes suggestions such as an automatic opt-out from sharing neural data, and bans on individuals selling their data, more feasible.
But other suggestions to use technological approaches to better protect data like “differential privacy,” “federated learning,” and blockchain are equally applicable to non-neural data. Similarly, the ability of machine learning algorithms to pick up bias inherent in training data is already a well-documented problem, and one with ramifications that go beyond just neuro-technology.
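To make "differential privacy" slightly less abstract: the classic Laplace mechanism releases an aggregate statistic with calibrated random noise, so that no single individual's record detectably changes the published value. A minimal Python sketch, assuming a simple counting query (the specific numbers are illustrative only):

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponential variates with mean `scale`
    # is Laplace-distributed with that scale.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(true_count, epsilon=0.5):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query changes by at most 1 when one person's record is
    added or removed (sensitivity 1), so noise of scale 1/epsilon masks
    any individual's presence. Smaller epsilon means stronger privacy
    and a noisier answer.
    """
    return true_count + laplace_noise(1 / epsilon)

# e.g. publishing how many users showed some pattern in their neural data
print(private_count(1000))  # a noisy value near 1000
```

The same mechanism works whether the underlying records are neural readings or shopping histories, which is the authors' point: these protections are general data-privacy tools, not neural-specific ones.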
When it comes to identity, agency, and augmentation, though, the authors show how the convergence of AI and neuro-technology could result in entirely novel challenges that could test our assumptions about the nature of the self, personal responsibility, and what ties humans together as a species.
They ask the reader to imagine if machine learning algorithms combined with neural interfaces allowed a form of ‘auto-complete’ function that could fill the gap between intention and action, or if you could telepathically control devices at great distance or in collaboration with other minds. These are all realistic possibilities that could blur our understanding of who we are and what actions we can attribute as our own.
The authors suggest adding “neurorights” that protect identity and agency to international treaties like the Universal Declaration of Human Rights, or possibly the creation of a new international convention on the technology. This isn’t an entirely new idea; in May, I reported on a proposal for four new human rights to protect people against neural implants being used to monitor their thoughts or interfere with or hijack their mental processes.
But these rights were designed primarily to protect against coercive exploitation of neuro-technology or the data it produces. The concerns around identity and agency are more philosophical, and it’s less clear that new rights would be an effective way to deal with them. While the examples the authors highlight could be forced upon someone, they sound more like something a person would willingly adopt, potentially waiving rights in return for enhanced capabilities.
The authors suggest these rights could enshrine a requirement to educate people about the possible cognitive and emotional side effects of neuro-technologies rather than the purely medical impacts. That’s a sensible suggestion, but ultimately people may have to make up their own minds about what they are willing to give up in return for new abilities.
This leads to the authors’ final area of concern—augmentation. As neuro-technology makes it possible for people to enhance their mental, physical, and sensory capacities, it is likely to raise concerns about equitable access, pressure to keep up, and the potential for discrimination against those who don’t. There’s also the danger that military applications could lead to an arms race.
The authors suggest that guidelines should be drawn up at both the national and international levels to set limits on augmentation, much like those being drawn up to control gene editing in humans, but they admit that “any lines drawn will inevitably be blurry.” That’s because it’s hard to predict the impact these technologies will have, and building international consensus will be difficult because some cultures lend more weight than others to things like privacy and individuality.
The temptation could be to simply ban the technology altogether, but the researchers warn that this could simply push it underground. In the end, they conclude that it may come down to the developers of the technology to ensure it does more good than harm. Individual engineers can’t be expected to shoulder this burden alone, though.
“History indicates that profit hunting will often trump social responsibility in the corporate world,” the authors write. “And even if, at an individual level, most technologists set out to benefit humanity, they can come up against complex ethical dilemmas for which they aren’t prepared.”
For this reason, they say, industry and academia need to devise a code of conduct similar to the Hippocratic Oath doctors are required to take, and rigorous ethical training needs to become standard when joining a company or laboratory.
Oct 18, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.
By Dan Clay
Curated by Helena M. Herrero Lamuedra
Meet Dawn. Her T-shirt is connected to the internet, and her tattoo unlocks her car door. She’s never gone shopping, but she gets a package on her doorstep every week. She’s never been lost or late, and she’s never once waited in line. She never goes anywhere without visiting in VR first, and she doesn’t buy anything that wasn’t made just for her.
Dawn is an average 25-year-old in the not-so-distant future. She craves mobility, flexibility, and uniqueness; she spends more on experience than she does on products; she demands speed, transparency, and control; and she has enough choice to avoid any company that doesn’t give her what she wants. We’re in the midst of remarkable change not seen since the Industrial Revolution, and a noticeable gap is growing between what Dawn wants and what traditional retailers provide.
In 2005 Amazon launched free two-day shipping. In 2014 it launched free two-hour shipping. It’s hard to get faster than “Now,” and once immediacy becomes table stakes, competition will move to prediction. By intelligently applying data from our connected devices, smart digital assistants will be able to deliver products before we even acknowledge the need: Imagine a pharmacy that knows you’re about to get sick; an electronics retailer that knows you forgot your charger; an online merchant that knows you’re out of toilet paper; and a subscription service that knows you have a wedding coming up, have a little extra in your bank account, and that you look good in blue. Near-perfect predictions are the future of retail, and it’s up to CX and UX designers to ensure that they are greeted as miraculous time-savers rather than creepy intrusions.
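Prediction of this kind can start from something as simple as purchase intervals. The sketch below is a deliberately naive illustration with invented data; a real retailer would blend in many more signals (household size, seasonality, connected-device readings), but the core idea of anticipating a need from past behaviour is the same:

```python
from datetime import date, timedelta

def predict_next_purchase(purchase_dates):
    """Estimate when a customer will next need a replenishable item.

    Naive model: assume the average gap between past purchases
    continues. `purchase_dates` must be in chronological order with
    at least two entries.
    """
    gaps = [(later - earlier).days
            for earlier, later in zip(purchase_dates, purchase_dates[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return purchase_dates[-1] + timedelta(days=round(avg_gap))

# Hypothetical reorder history for a household staple.
history = [date(2017, 1, 2), date(2017, 1, 30), date(2017, 2, 27)]
print(predict_next_purchase(history))  # 2017-03-27
```

Shipping the item a day or two before the predicted date is what turns "Now" into "before you asked", and it is precisely where the design question of miraculous versus creepy gets decided.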
Every product is personalized
While consumers are increasingly wary about how much of their personal data is being tracked, they’re also increasingly willing to trade privacy for more tangible benefits. It then falls on companies to ensure those benefits justify the exchange. In the retail space this increasingly means perfectly tailored products and a more personally relevant experience. Etsy recently acquired an AI startup to make its search experience more relevant and tailored. HelloAva provides customers with personalized skincare product recommendations based on machine learning combined with a few texts and a selfie. Amazon, constantly at the forefront of customer needs, recently acquired a patent for a custom clothing manufacturing system.
Market to the machines
Dawn, our customer of the future, won’t need to customize all of her purchases; for many of her needs, she’ll give her intelligent, IoT-enabled agent (think Alexa with a master’s degree) personalized filters so the agent can buy on her behalf. When Siri is choosing which shoes to rent, the robot almost becomes the customer, and retailers must win over smart AI assistants before they even reach end customers. Netflix already has a team of people working on this new realm of marketing to machines. As CEO Reed Hastings quipped at this year’s Mobile World Congress, “I’m not sure if in 20 to 50 years we are going to be entertaining you, or entertaining AIs.”
Branded, immersive experiences matter more than ever
As online shopping and automation increase, physical retail spaces will have to deliver much more than just a good shopping experience to compel people to visit. This could be through added education (like the expert stylists at Nordstrom’s merchandise-free store), heightened service personalization (like Asics’ on-site 3D foot mapping and gait-cycle analysis), or constantly evolving entertainment (like the monthly changing “exhibition” at Gentle Monster’s Seoul flagship store).
In this context, brand is becoming more than a value proposition or signifier—it’s the essential ingredient preventing companies from becoming commoditized by an on-demand, automated world where your car picks its own motor oil. Brands have a vital responsibility to create a community for customers to belong to and believe in.
A mobile world that feels like a single channel experience
Dawn will be increasingly mobile, and she’ll expect retailers to move along with her. She may research dresses on her phone and expect the store associate to know what she’s looked at. It’s no secret that mobile shopping is continuing to grow, but retailers need to think less about developing separate strategies for their channels and more about maintaining a continuous flow with the one channel that matters: the customer channel.
WeChat, China’s largest social media platform, for example, is used for everything from online shopping and paying at supermarkets to ordering a taxi and getting flight updates, creating a seamless “single channel” experience across all interactions. Snapchat’s new Context Cards, which let users read location-based reviews, see business information, and hail rides all within the app, build towards a similar single-channel experience.
The future promises profound change. Yet perhaps the most pressing challenge for retailers is keeping up with customers’ expectations for immediacy, personalization, innovative experiences, and the other myriad ways technological and societal changes are making Dawn the most demanding customer the retail industry has ever seen. The future is daunting, but it’s also full of opportunity, and the retailers that can anticipate the needs of the customer of the future are well-poised for success in the years to come.
May 15, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.
By Scott Scanlon, Hunt Scanlon Media
Curated by Helena M. Herrero Lamuedra
Companies are facing a radically shifting context for the workforce, the workplace, and the world of work, and these shifts have already changed the rules for nearly every organizational people practice, from learning and management to executive recruiting and the definition of work itself. Every business leader, no matter their function or industry, has experienced some form of radical work transformation, whether digital (in the form of social media, for example), demographic, or in countless other ways. Old paradigms are out and new ways of thinking are in, and talent, that one ‘commodity’ we’re all after, is caught up in the middle of it all.
Almost 90 percent of HR and business leaders rate building the organization of the future as their highest priority, according to Deloitte’s latest Global Human Capital Trends report, “Rewriting the Rules for the Digital Age.” In the report, Deloitte issues a call-to-action for companies to completely reconsider their organizational structure, talent and HR strategies to keep pace with the disruption.
A Networked World of Work
“Technology is advancing at an unprecedented rate and these innovations have completely transformed the way we live, work and communicate,” said Josh Bersin, principal and founder, Bersin by Deloitte, Deloitte Consulting. “Ultimately, the digital world of work has changed the rules of business. Organizations should shift their entire mind-set and behaviors to ensure they can lead, organize, motivate, manage and engage the 21st century workforce, or risk being left behind.”
With more than 10,000 HR and business leaders in 140 countries weighing in, this massive study reveals that business leaders are turning to new organization models, which highlight the networked nature of today’s world of work. However, as business productivity often fails to keep pace with technological progress, Deloitte finds that HR leaders are struggling to keep up, with only 35 percent of them rating their capabilities as ‘good’ or ‘excellent.’
“As technology, artificial intelligence, and robotics transform business models and work, companies should start to rethink their management practices and organizational models,” said Brett Walsh, global human capital leader for Deloitte Global. “The future of work is driving the development of a set of ‘new rules’ that organizations should follow if they want to remain competitive.”
Talent Acquisition: Biggest Issue Facing Companies
As the workforce evolves, organizations are focusing on networks of teams, and recruiting and developing the right people is more consequential than ever. However, while Deloitte finds that cognitive technologies have helped leaders bring talent acquisition into the digital world, only 22 percent of survey respondents describe their companies as ‘excellent’ at building a differentiated employee experience once talent is acquired. In fact, the gap between talent acquisition’s importance and the ability to meet the need increased over last year’s survey.
How Else the World of Work Is Changing
It is, indeed, a landscape of shifting priorities, and nowhere are we seeing this unfold more than among the group that matters most: job candidates. Five years ago, benefits topped their list of preferences. Today it’s culture and flexibility. Organizations need talented employees to drive strategy and achieve goals, but finding, recruiting and retaining people is becoming more difficult. While the severity of the issue varies among organizations, industries and geographies, it’s clear that this new landscape has created new demands. And organizations are scrambling.
It is critical, according to the report, to take an integrated approach to building the employee experience, with a large part of it centering on ‘careers and learning,’ which rose to second place on HR and business leaders’ priority lists, with 83 percent of those surveyed ranking it as ‘important’ or ‘very important.’ Deloitte finds that as organizations shed legacy systems and dismantle yesterday’s hierarchies, it’s important to place a higher premium on implementing immersive learning experiences to develop leaders who can thrive in today’s digital world and appeal to diverse workforce needs.
The importance of leadership as a driver of the employee experience remains strong, as the percentage of companies with experiential programs for leaders rose nearly 20 percentage points from 47 percent in 2015 to 64 percent this year. Deloitte believes there is still a crucial need, however, for stronger and different types of leaders, particularly as today’s business world demands those who demonstrate more agile and digital capabilities.
Time to Rewrite the Rules
As organizations become more digital, leaders should consider disruptive technologies for every aspect of their human capital needs. Deloitte finds that 56 percent of companies are redesigning their HR programs to leverage digital and mobile tools, and 33 percent are already using some form of artificial intelligence (AI) applications to deliver HR solutions.
“HR and other business leaders tell us that they are being asked to create a digital workplace in order to become an ‘organization of the future,’” said Erica Volini, a principal with Deloitte Consulting LLP, and national managing director of the firm’s U.S. human capital practice. “To rewrite the rules on a broad scale, HR should play a leading role in helping the company redesign the organization by bringing digital technologies to both the workforce and to the HR organization itself.”
Deloitte found that the HR function is in the middle of a wide-ranging identity shift. To position themselves effectively as a key business advisor to the organization, it is important for HR to focus on service delivery efficiency and excellence in talent programs, as well as the entire design of work using a digital lens.
How Jobs Are Being Reinvented
While many jobs are being reinvented through technology and some tasks are being automated, Deloitte’s research shows that the essentially human aspects of work – such as empathy, communication, and problem solving – are becoming more important than ever.
This shift is not only driving an increased focus on reskilling, but also on the importance of people analytics to help organizations gain even greater insights into the capabilities of their workforce on a global scale. However, organizations continue to fall short in this area, with only eight percent reporting they have usable data, and only nine percent believing they have a good understanding of the talent factors that drive performance in this new world of work.
One of the new rules for the digital age is to expand our vision of the workforce; think about jobs in the context of tasks that can be automated (or outsourced) and the new role of human skills; and focus even more heavily on the customer experience, employee experience, and employment value proposition for people.
This challenge requires major cross-functional attention, effort, and collaboration. It also represents one of the biggest opportunities for the HR organization. To be able to rewrite the rules, HR needs to prove it has the insights and capabilities to successfully play outside the lines.