Yoga as a Remedy for Our Stressed, Sedentary Digital Age

Most of us spend the majority of our days on our phones, computers, and tablets, and in front of our TVs. We also spend most of that time sitting or reclining, whether in our cars, at our desks, or on our couches. Just as humans are not meant to be wired all the time, we are not meant to be sedentary for most of the day. It’s not a coincidence that we are restless, stressed, anxious, and suffer constant backaches and pains.

Yoga can alleviate the stress, anxiety, and aches and pains that come with the digital age, says Peter Mico, a yoga teacher and studio owner in Idaho. One of his specialties is training and teaching students with chronic pain. He also operates Blue Earth Yoga, an institute for yoga, health, and longevity that holds retreats around the world incorporating Blue Zones principles and education; some of these retreats take place in blue zones regions themselves. We recently talked with Peter about yoga, the Blue Zones lifestyle, and the yoga moves you can do anywhere, even at work.

How do you see yoga and Blue Zones research intersecting?

PETER MICO: Yoga is more than just a good workout. Like the daily routines and habits of the elder inhabitants of the blue zones, yoga combines movement and stress relief. It’s about being mindful, being in the body, and being in the moment. In my experience in the blue zones, the older generation is wonderfully grounded and present. The practice of yoga helps bring us to a place these cultures have reached through their way of life, one very different from our modern lifestyle of constant distraction and stress.

In our society, it’s common for older people to fall and break a hip; not so in the blue zones regions. As Dan Buettner has shown us, centenarians in the world’s blue zones are gardening, weeding, and doing yard work well into their 90s and 100s. They haven’t spent their lives sitting in cars and at desks; they’re regularly getting up and down from the ground. In this way, it’s as if they are practicing yoga all day, every day, promoting good muscle tone and strong bones with full-body movement.

Also, even though yoga is not a religion, it can be a spiritual practice. The practice of learning to breathe slowly and deeply from your diaphragm, as you do in yoga, is itself a form of meditation, besides being invigorating and helping to relieve stress. Blue Zones centenarians, though they came from different religions, had spiritual lives and reaped the benefits of regular prayer, meditation, and spiritual rituals.

Besides stress relief and learning to breathe properly, what are some of the other benefits of yoga?

PM: Driving in cars, sitting in a lounge chair watching TV, or hunching over a computer all day creates multiple problems for the spine. That’s a big reason why an estimated 80% of Americans experience lower back pain at some point. Yoga can be very helpful to people with lower back problems, and can serve as a preventive measure so you don’t develop them. Its emphasis on posture and alignment, particularly in the sacral complex, makes it a powerful remedy for this pain and discomfort. People come to us with herniated disks, scoliosis, and chronic muscular pain, and find relief after a steady yoga practice.

The same is true of ‘mouse arm’ and its effects on the cervical spine, which is a big deal. Allowing the head to hang forward toward the screen, tilting it to look up, extending the mouse arm forward, and then holding that pose for hours is a recipe for disaster for the cervical spine, especially the C4, C5, and C6 vertebrae. Yoga is a powerful practice for promoting neck health.

Office, Desk, or Cubicle Yoga: 4 Essential Moves to Reverse “Computer Crouch” and “Mouse Arm”

For a typical office job of answering telephones and working at a computer, there are a few simple poses that you should do often.

Every 15 Minutes, Sitting Moves:

1. Elbow Hold:

Put your arms up over your head and hold your opposite elbows. Then move your held elbows in four directions (forward, backward, and side to side), and add small backbends and forward bends. Do this for 20-30 seconds every 15 minutes.

2. Arm Twists:

Put your arms straight out to the sides with your thumbs up. Rotate your arms forward and then backward so your thumbs move in a circular motion. Do this 10 times. Then repeat with your arms rotating in opposite directions from each other, 10 times as well.

Every 30 Minutes, Standing Moves:

1. Baby Backbends: Stand up and clasp your hands behind your back. Arch backward gently as you open your chest and roll your shoulders back and behind you. Then turn your head from side to side, 5 times. Then tilt your ear toward your shoulder, 5 times on each side.

2. Arm Circles: Put your right hand on your right shoulder. Extend your left arm straight out to the side and bend your wrist so your fingers point toward the floor. Move your left arm around in a circle about 5 times each way. Then repeat on your right side.

What are some yoga myths that you want to debunk for our readers?

PM: One is that yoga is just for women. Many women have flexibility and come to yoga for strength. Often men come to the studio with some strength, but are seeking or needing flexibility. People seem to think they shouldn’t come to class unless they are flexible. But class is where you get flexible. It would be like saying you won’t go to the gym because you don’t have muscles.

Another myth is that yoga means contortionism. I don’t believe in celebrating just the big crazy poses or the yoga competitiveness of this body-centric society we live in. I once overheard Richard Freeman (a master yogi) tell another teacher that the most beautiful pose he ever saw was an 80-year-old man doing a backbend. No airs, just a simple backbend with mindfulness. Beautiful.

Artificial Intelligence and Ethics

#AIforgood #ethics #leadership #digitaldisruption #digitaltransformation

Enterprises must confront the ethical implications of AI use as they increasingly roll out technology that has the potential to reshape how humans interact with machines.

Many enterprises are exploring how AI can help move their business forward, save time and money, and provide more value to all their stakeholders. However, most companies are missing the conversation about the ethical issues of AI use and adoption.
Even at this early stage of AI adoption, it’s important for enterprises to take ethical and responsible approaches when creating AI systems, because the industry is already starting to see backlash against AI implementations that play fast and loose with ethical concerns.
For example, Google recently saw pushback over its Google Duplex release, which appeared to show an AI-enabled system pretending to be human. Microsoft saw significant issues with its Tay bot, which quickly went off the rails. And, of course, who can ignore what Elon Musk and others are saying about the risks of AI?
Yet enterprises are already starting to pay attention to the ethical issues of AI use. Microsoft, for example, has created the AI and Ethics in Engineering and Research Committee to make sure the company’s core values are included in the AI systems it creates.

How AI systems can be biased

AI systems can quickly find themselves in ethical trouble when left inadequately supervised. Notable examples include Google’s image-recognition tool mistakenly classifying black people as gorillas, and the aforementioned Tay chatbot becoming a racist, sexist bigot.
How could this happen? Plainly put, AI systems are only as good as their training data, and that training data has bias. Just like humans, AI systems need to be fed data and told what that data is in order to learn from it.
What happens when you feed biased training data to a machine is predictable: biased results. Bias in AI systems often stems from inherent human bias. When technologists build systems around their own experience (and Silicon Valley has a notable diversity problem), or when they use training data shaped by historical human bias, the resulting data tends to reflect that lack of diversity or systemic bias.
Because of this, systems inherit this bias and start to erode the trust of users. Companies are starting to realize that if they plan to gain adoption of their AI systems and realize ROI, those AI systems must be trustworthy. Without trust, they won’t be used, and then the AI investment will be a waste.
Companies are combating inherent data bias by implementing programs to not only broaden the diversity of their data sets, but also the diversity of their teams. More diversity on teams enables a diverse group of people to feed systems different data points from which to learn. Organizations like AI4ALL are helping enterprises meet both of these anti-bias goals.
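To make the mechanism concrete, here is a minimal sketch in Python (scikit-learn, wholly synthetic data; the loan-approval framing, group sizes, and rates are invented for illustration) of how a model trained on biased historical labels reproduces that bias:

```python
# Minimal illustration: a model trained on biased historical labels
# reproduces the bias in its own predictions. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # 0 = majority group, 1 = minority group
qualified = rng.random(n) < 0.5      # true qualification, equal across groups

# Historical labels encode human bias: qualified minority applicants were
# approved far less often than equally qualified majority applicants.
approved = qualified & ((group == 0) | (rng.random(n) < 0.3))

X = np.column_stack([group, qualified.astype(int)])
model = LogisticRegression().fit(X, approved)

for g in (0, 1):
    pred = model.predict([[g, 1]])[0]   # an equally qualified applicant
    print(f"group {g}, qualified -> predicted approval: {bool(pred)}")
```

Diversifying the data, and the teams curating it, changes what a model like this can learn, which is exactly the remediation described next.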

More human-like bots raise stakes for ethical AI use

At Google’s I/O event earlier this month, the company demoed Google Duplex, an experimental voice assistant shown, via a prerecorded interaction, placing a phone call to a hair salon on a human’s behalf. The system did a convincing enough job of impersonating a human, even adding umms and mm-hmms, that the person on the other side was fooled into thinking she was talking to another human.
This demo raised a number of significant and legitimate ethical issues of AI use. Why did the Duplex system try to fake being human? Why didn’t it just identify itself as a bot upfront? Is it OK to fool humans into thinking they’re talking to other humans?
Putting bots like this out into the real world, where they pretend to be human or even assume the identity of an actual person, can be a big problem. Humans don’t like being fooled. There is already significant erosion of trust in online systems, with people starting not to believe what they read, see, or hear.
With bots like Duplex on the loose, people will soon stop believing anyone or anything they interact with via phone. People want to know who they are talking to. They seem to be fine with talking to humans or bots as long as the other party truthfully identifies itself.

Ethical AI is needed for broad AI adoption

Many in the industry are pursuing a code of ethics for bots to identify potential issues, malicious or benign, that could arise, and to address them now, before it’s too late. Such a code wouldn’t just govern legitimate uses of bot technology; it would also cover intentionally malicious uses of voice bots.
Imagine a malicious user instructing a bot to call a parent with a message to pick up their sick child at school, just to lure them out of the house so a burglar can break in and rob them while they’re away. Bot calls from competing restaurants could make fake reservations, preventing actual customers from getting tables.
Also concerning are information-disclosure issues and laws that have not kept up with voice bots. For example, does it violate HIPAA for a bot to call your doctor’s office to make an appointment and ask for medical information over the phone?
Forward-thinking companies see the need to create AI systems that address ethics and bias issues, and are taking active measures now. These enterprises have learned from previous cybersecurity issues that addressing trust-related concerns as an afterthought comes at a significant risk. As such, they are investing time and effort to address ethics concerns now before trust in AI systems is eroded to the point of no return. Other businesses should do so, too.

Artificial Intelligence for Social Good

#AIforgood #digitaltransformation #techdisruption #sustainabledevelopmentgoals

AI is not a silver bullet, but it could help tackle some of the world’s most challenging social problems.

Artificial intelligence (AI) has the potential to help tackle some of the world’s most challenging social problems. To analyze potential applications for social good, we compiled a library of about 160 AI social-impact use cases. They suggest that existing capabilities could contribute to tackling cases across all 17 of the UN’s sustainable-development goals, potentially helping hundreds of millions of people in both advanced and emerging countries.
AI is already being applied in real life in about one-third of these use cases, albeit in relatively small tests. They range from diagnosing cancer to helping blind people navigate their surroundings, identifying victims of online sexual exploitation, and aiding disaster-relief efforts (such as those that followed the flooding from Hurricane Harvey in 2017). However, AI is only part of a much broader tool kit of measures that can be used to tackle societal issues. For now, issues such as data accessibility and shortages of AI talent constrain its application for social good.
The article is divided into five sections:

First: Mapping AI use cases to domains of social good

For the purposes of this research, we defined AI as deep learning. We grouped use cases into ten social-impact domains based on taxonomies in use among social-sector organizations, such as the AI for Good Foundation and the World Bank. Each use case highlights a type of meaningful problem that can be solved by one or more AI capabilities. The cost of human suffering, and the value of alleviating it, are impossible to gauge and compare. Nonetheless, employing usage frequency as a proxy, we measured the potential impact of different AI capabilities.
For about one-third of the use cases in our library, we identified an actual AI deployment (Exhibit 1). Since many of these solutions are small test cases to determine feasibility, their functionality and scope of deployment often suggest that additional potential could be captured. For three-quarters of our use cases, we have seen solutions deployed that use some level of advanced analytics; most of these use cases, although not all, would further benefit from the use of AI techniques.

Crisis response

These are specific crisis-related challenges, such as responding to natural and human-made disasters, conducting search-and-rescue missions, and containing disease outbreaks. Examples include using AI on satellite data to map and predict the progression of wildfires and thereby optimize the response of firefighters. Drones with AI capabilities can also be used to find missing persons in wilderness areas.

Economic empowerment

With an emphasis on currently vulnerable populations, these domains involve opening access to economic resources and opportunities, including jobs, the development of skills, and market information. For example, AI can be used to detect plant damage early through low-altitude sensors, including smartphones and drones, to improve yields for small farms.

Educational challenges

These include maximizing student achievement and improving teachers’ productivity. For example, adaptive-learning technology could recommend content to students based on their past success and engagement with the material.

Environmental challenges

Sustaining biodiversity and combating the depletion of natural resources, pollution, and climate change are challenges in this domain. (See Exhibit 2 for an illustration of how AI can be used to catch wildlife poachers.) The Rainforest Connection, a Bay Area nonprofit, uses AI tools such as Google’s TensorFlow in conservancy efforts across the world. Its platform can detect illegal logging in vulnerable forest areas by analyzing audio-sensor data.
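The article does not describe Rainforest Connection’s actual model, but the general technique (classifying short audio clips by their spectrograms) can be sketched in a few lines of TensorFlow. The layer sizes and the "chainsaw vs. background" framing below are illustrative assumptions, not details from the source:

```python
# Hypothetical sketch of audio-event detection: turn a waveform into a
# spectrogram and classify it as "chainsaw" vs. "background noise".
# Not Rainforest Connection's actual model.
import tensorflow as tf

def to_spectrogram(waveform):
    # Short-time Fourier transform -> magnitude spectrogram.
    # A 512-sample frame yields 257 frequency bins per time step.
    stft = tf.signal.stft(waveform, frame_length=512, frame_step=256)
    return tf.abs(stft)[..., tf.newaxis]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 257, 1)),     # (time, freq bins, channel)
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(chainsaw)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Training would follow with labeled field recordings:
# model.fit(to_spectrogram(waveforms), labels, epochs=...)
```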

Equality and inclusion

Addressing challenges to equality, inclusion, and self-determination (such as reducing or eliminating bias based on race, sexual orientation, religion, citizenship, and disabilities) are issues in this domain. One use case, based on work by Affectiva, which was spun out of the MIT Media Lab, and Autism Glass, a Stanford research project, involves using AI to automate the recognition of emotions and to provide social cues to help individuals along the autism spectrum interact in social environments.

Health and hunger

This domain addresses health and hunger challenges, including early-stage diagnosis and optimized food distribution. Researchers at the University of Heidelberg and Stanford University have created a disease-detection AI system that visually diagnoses natural images, such as photos of skin lesions, to determine whether they are cancerous; it outperformed professional dermatologists. AI-enabled wearable devices can already detect people with potential early signs of diabetes with 85 percent accuracy by analyzing heart-rate sensor data. If sufficiently affordable, these devices could help more than 400 million people around the world afflicted by the disease.

Information verification and validation

This domain concerns the challenge of facilitating the provision, validation, and recommendation of helpful, valuable, and reliable information to all. It focuses on filtering or counteracting misleading and distorted content, including false and polarizing information disseminated through the relatively new channels of the internet and social media. Such content can have severely negative consequences, including the manipulation of election results or even, as in India and Mexico, mob killings triggered by false news disseminated through messaging applications. Use cases in this domain include actively presenting opposing views to ideologically isolated pockets in social media.

Infrastructure management

This domain includes infrastructure challenges that could promote the public good in the categories of energy, water and waste management, transportation, real estate, and urban planning. For example, traffic-light networks can be optimized using real-time traffic camera data and Internet of Things (IoT) sensors to maximize vehicle throughput. AI can also be used to schedule predictive maintenance of public transportation systems, such as trains and public infrastructure (including bridges), to identify potentially malfunctioning components.

Public and social-sector management

Initiatives related to efficiency and the effective management of public- and social-sector entities, including strong institutions, transparency, and financial management, are included in this domain. For example, AI can be used to identify tax fraud using alternative data such as browsing data, retail data, or payments history.

Security and justice

This domain involves challenges in society such as preventing crime and other physical dangers, as well as tracking criminals and mitigating bias in police forces. It focuses on security, policing, and criminal-justice issues as a unique category, rather than as part of public-sector management. An example is using AI and data from IoT devices to create solutions that help firefighters determine safe paths through burning buildings.
The United Nations’ Sustainable Development Goals (SDGs) are among the best-known and most frequently cited societal challenges, and our use cases map to all 17 of the goals, supporting some aspect of each one (Exhibit 3). Our use-case library does not rest on the taxonomy of the SDGs, because their goals, unlike ours, are not directly related to AI usage; about 20 cases in our library do not map to the SDGs at all. The chart should not be read as a comprehensive evaluation of AI’s potential for each SDG; if an SDG has a low number of cases, that reflects our library rather than AI’s applicability to that SDG.

Second: AI capabilities that can be used for social good

We identified 18 AI capabilities that could be used to benefit society. Fourteen of them fall into three major categories: computer vision, natural-language processing, and speech and audio processing. The remaining four are stand-alone: three AI capabilities (reinforcement learning, content generation, and structured deep learning) plus a category for other advanced-analytics techniques.
When we subsequently mapped these capabilities to domains (aggregating use cases) in a heat map, we found some clear patterns.

Image classification and object detection are powerful computer-vision capabilities

Within computer vision, the specific capabilities of image classification and object detection stand out for their potential applications for social good. These capabilities are often used together—for example, when drones need computer vision to navigate a complex forest environment for search-and-rescue purposes. In this case, image classification may be used to distinguish normal ground cover from footpaths, thereby guiding the drone’s directional navigation, while object detection helps circumvent obstacles such as trees.
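As a schematic sketch of how the two capabilities divide the work, consider the loop below; classify_terrain and detect_obstacles are hypothetical stand-ins for trained vision models, not real APIs:

```python
# Schematic sketch of a search-and-rescue drone's perception loop.
# classify_terrain() and detect_obstacles() are hypothetical stand-ins
# for a trained image classifier and object detector.
def classify_terrain(frame):
    # Real system: an image classifier labeling the view,
    # e.g. "footpath" vs. "ground_cover".
    return "footpath"

def detect_obstacles(frame):
    # Real system: an object detector returning bounding boxes for trees etc.
    return [{"label": "tree", "box": (120, 40, 60, 200)}]

def plan_step(frame):
    # Classification steers the search; detection vetoes unsafe headings.
    heading = "follow_path" if classify_terrain(frame) == "footpath" else "grid_search"
    if detect_obstacles(frame):
        heading = "avoid_obstacle"
    return heading

print(plan_step(frame=None))  # -> "avoid_obstacle"
```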
Some of these use cases consist of tasks a human being could potentially accomplish on an individual level, but the required number of instances is so large that it exceeds human capacity (for example, finding flooded or unusable roads across a large area after a hurricane). In other cases, an AI system can be more accurate than humans, often by processing more information (for example, the early identification of plant diseases to prevent infection of the entire crop).
Computer-vision capabilities such as the identification of people, face detection, and emotion recognition are relevant only in select domains and use cases, including crisis response, security, equality, and education, but where they are relevant, their impact is great. In these use cases, the common theme is the need to identify individuals, most easily accomplished through the analysis of images. An example of such a use case would be taking advantage of face detection on surveillance footage to detect the presence of intruders in a specific area. (Face detection applications detect the presence of people in an image or video frame and should not be confused with facial recognition, which is used to identify individuals by their features.)

Natural-language processing

Some aspects of natural-language processing, including sentiment analysis, language translation, and language understanding, also stand out as applicable to a wide range of domains and use cases. Natural-language processing is most useful in domains where information is commonly stored in unstructured textual form, such as incident reports, health records, newspaper articles, and SMS messages.
As with methods based on computer vision, in some cases a human can probably perform a task with greater accuracy than a trained machine-learning model can. Nonetheless, the speed of “good enough” automated systems can enable meaningful scale efficiencies—for example, providing automated answers to questions that citizens may ask through email. In other cases, especially those that require processing and analyzing vast amounts of information quickly, AI models could outperform humans. An illustrative example is monitoring disease outbreaks by analyzing tweets sent in multiple local languages.
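A toy version of that outbreak-monitoring idea, a classifier that flags illness-related posts, fits in a few lines of scikit-learn; the example messages and labels are invented for illustration:

```python
# Minimal sketch of a text classifier that could flag illness-related
# posts for outbreak monitoring. Toy data, scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "half my village has fever and rash this week",
    "great harvest festival today, everyone dancing",
    "clinic overwhelmed, many children vomiting",
    "new road finally opened near the market",
]
labels = [1, 0, 1, 0]  # 1 = possible outbreak signal

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)
print(model.predict(["my kids have had a fever for three days"]))
```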
Some capabilities, or combinations of capabilities, can give the target population opportunities that would not otherwise exist, especially for use cases that involve understanding the natural environment through the interpretation of vision, sound, and speech. An example is the use of AI to help educate children who are on the autism spectrum. Although professional therapists have proved effective in creating behavioral-learning plans for children with autism spectrum disorder (ASD), waitlists for therapy are long. AI tools, primarily using emotion recognition and face detection, can increase access to such educational opportunities by providing cues that help children identify and ultimately learn facial expressions among their family members and friends.

Structured deep learning also may have social-benefit applications

A third category of AI capabilities with social-good applications is structured deep learning to analyze traditional tabular data sets. It can help solve problems ranging from tax fraud (using tax-return data) to finding otherwise hard-to-discover patterns in electronic health records.
Structured deep learning (SDL) has been gaining momentum in the commercial sector in recent years. We expect that trend to spill over into solutions for social-good use cases, particularly given the abundance of tabular data in the public and social sectors. By automating aspects of basic feature engineering, SDL solutions reduce the need for deep domain expertise or an innate understanding of which aspects of the data are important.
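In code, "structured deep learning" is simply a neural network fed tabular features. A minimal Keras sketch on synthetic data (the feature count and architecture are arbitrary choices, not from the source):

```python
# Hedged sketch: a small feed-forward network on tabular features,
# the kind of "structured deep learning" described above. Synthetic data.
import numpy as np
import tensorflow as tf

X = np.random.rand(500, 12).astype("float32")    # 12 tabular features
y = (X[:, 0] + X[:, 3] > 1.0).astype("float32")  # synthetic target

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
```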

Advanced analytics can be a more time- and cost-effective solution than AI for some use cases

Some of the use cases in our library are better suited to traditional analytics techniques, which are easier to create, than to AI. Moreover, for certain tasks, other analytical techniques can be more suitable than deep learning. For example, in cases where there is a premium on explainability, decision-tree-based models can often be more easily understood by humans. In Flint, Michigan, machine learning (sometimes referred to as AI, although for this research we defined AI more narrowly as deep learning) is being used to predict which houses may still have lead water pipes.
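The explainability advantage is easy to see in code: a shallow decision tree's learned rules can simply be printed and read. The sketch below uses synthetic stand-in data and invented feature names, not the actual Flint model:

```python
# A decision tree's learned rules can be printed and read by a
# non-specialist. Synthetic stand-in data and invented feature names.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print human-readable if/else rules for the whole model.
print(export_text(tree, feature_names=["home_age", "pipe_era", "district", "prior_tests"]))
```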

Third: Overcoming bottlenecks, especially for data and talent

While the social impact of AI is potentially very large, certain bottlenecks must be overcome if even some of that potential is to be realized. In all, we identified 18 potential bottlenecks through interviews with social-domain experts and with AI researchers and practitioners. We grouped these bottlenecks into four categories of importance.
The most significant bottlenecks are data accessibility, a shortage of talent to develop AI solutions, and “last-mile” implementation challenges.

Data needed for social-impact uses may not be easily accessible

Data accessibility remains a significant challenge. Resolving it will require a willingness, by both private- and public-sector organizations, to make data available. Much of the data essential or useful for social-good applications are in private hands or in public institutions that might not be willing to share their data. These data owners include telecommunications and satellite companies; social-media platforms; financial institutions (for details such as credit histories); hospitals, doctors, and other health providers (medical information); and governments (including tax information for private individuals). Social entrepreneurs and nongovernmental organizations (NGOs) may have difficulty accessing these data sets because of regulations on data use, privacy concerns, and bureaucratic inertia. The data may also have business value and could be commercially available for purchase. Given the challenges of distinguishing between social use and commercial use, the price may be too high for NGOs and others wanting to deploy the data for societal benefits.

The expert AI talent needed to develop and train AI models is in short supply

Just over half of the use cases in our library can leverage solutions created by people with less AI experience. The remaining use cases are more complex, as a result of a combination of factors that vary with the specific case. These require high-level AI expertise—people who may have PhDs or considerable experience with the technologies. Such people are in short supply.
For the first group of use cases, those requiring less AI expertise, the needed solution builders are data scientists or software developers with AI experience but not necessarily high-level expertise. Most of these use cases involve less complex models that rely on a single mode of data input.
The complexity of problems increases significantly when use cases require several AI capabilities to work together cohesively, as well as multiple different data-type inputs. Progress in developing solutions for these cases will thus require high-level talent, for which demand far outstrips supply and competition is fierce.

‘Last-mile’ implementation challenges are also a significant bottleneck for AI deployment for social good

Even when high-level AI expertise is not required, NGOs and other social-sector organizations can face technical problems over time in deploying and sustaining AI models, which require continued access to some level of AI-related skills. The talent required could range from engineers who can maintain or improve the models to data scientists who can extract meaningful output from them. Handoffs fail when providers of solutions implement them and then disappear without ensuring that a sustainable plan is in place.
Organizations may also have difficulty interpreting the results of an AI model. Even if a model achieves a desired level of accuracy on test data, new or unanticipated failure cases often appear in real-life scenarios. An understanding of how the solution works may require a data scientist or “translator.” Without one, the NGO or other implementing organization may trust the model’s results too much: most AI models cannot perform accurately all the time, and many are described as “brittle” (that is, they fail when their inputs stray in specific ways from the data sets on which they were trained).

Fourth: Risks to be managed

AI tools and techniques can be misused by authorities and others who have access to them, so principles for their use must be established. AI solutions can also unintentionally harm the very people they are supposed to help.
An analysis of our use-case library found that four main categories of risk are particularly relevant when AI solutions are leveraged for social good: bias and fairness, privacy, safe use and security, and “explainability” (the ability to identify the feature or data set that leads to a particular decision or prediction).
Bias in AI may perpetuate and aggravate existing prejudices and social inequalities, affecting already-vulnerable populations and amplifying existing cultural prejudices. Bias of this kind often comes about through problematic historical data, including unrepresentative samples or inaccurate data. For example, AI-based risk scoring for criminal-justice purposes may be trained on historical criminal data that include biases (among other things, African Americans may be unfairly labeled as high risk). As a result, AI risk scores would perpetuate this bias. Some AI applications already show large disparities in accuracy depending on the data used to train algorithms; for example, one examination of facial-analysis software found an error rate of 0.8 percent for light-skinned men but 34.7 percent for dark-skinned women.
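Audits like the facial-analysis study come down to computing a model's error rate separately for each subgroup. A minimal sketch with toy numbers and invented groups:

```python
# Sketch of a subgroup accuracy audit: compute a model's error rate
# separately for each demographic group. Toy numbers, invented groups.
import numpy as np

def error_rate_by_group(y_true, y_pred, group):
    for g in np.unique(group):
        mask = group == g
        err = np.mean(y_true[mask] != y_pred[mask])
        print(f"group {g}: error rate {err:.1%} (n={mask.sum()})")

# Toy example: the model is far less accurate for group "B".
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
error_rate_by_group(y_true, y_pred, group)
```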
One key source of bias can be poor data quality—for example, when data on past employment records are used to identify future candidates. An AI-powered recruiting tool used by one tech company was recently abandoned after several years of trials. It appeared to show systematic bias against women, a pattern learned from training data drawn from years of hiring history. To counteract such biases, skilled and diverse data-science teams should take into account potential issues in the training data or sample intelligently from them.

Breaching the privacy of personal information could cause harm

Privacy concerns about sensitive personal data are already rife for AI. The ability to assuage these concerns could help speed public acceptance of its widespread use by profit-making and nonprofit organizations alike. The risk is that financial, tax, health, and similar records could become accessible, through porous AI systems, to people without a legitimate need to access them. That would cause embarrassment and, potentially, harm.

Safe use and security are essential for societal good uses of AI

Ensuring that AI applications are used safely and responsibly is an essential prerequisite for their widespread deployment for societal aims. Seeking to further social good with dangerous technologies would contradict the core mission and could also spark a backlash, given the potentially large number of people involved. For technologies that could affect life and well-being, it will be important to have safety mechanisms in place, including compliance with existing laws and regulations. For example, if AI misdiagnoses patients in hospitals that do not have a safety mechanism in place—particularly if these systems are directly connected to treatment processes—the outcomes could be catastrophic. The framework for accountability and liability for harm done by AI is still evolving.

Decisions made by complex AI models will need to become more readily explainable

Explaining in human terms the results from large, complex AI models remains one of the key challenges to acceptance by users and regulatory authorities. Opening the AI “black box” to show how decisions are made, as well as which factors, features, and data sets are decisive and which are not, will be important for the social use of AI. That will be especially true for stakeholders such as NGOs, which will require a basic level of transparency and will probably want to give clear explanations of the decisions they make. Explainability is especially important for use cases relating to decision making about individuals and, in particular, for cases related to justice and criminal identification, since an accused person must be able to appeal a decision in a meaningful way.

Mitigating risks

Effective mitigation strategies typically involve “human in the loop” interventions: humans are involved in the decision or analysis loop to validate models and double-check results from AI solutions. Such interventions may call for cross-functional teams, including domain experts, engineers, product managers, user-experience researchers, legal professionals, and others, to flag and assess possible unintended consequences.
Human analysis of the data used to train models may be able to identify issues such as bias and lack of representation. Fairness and security “red teams” could carry out solution tests, and in some cases third parties could be brought in to test solutions by using an adversarial approach. To mitigate data bias, university researchers have demonstrated methods such as sampling the data with an understanding of their inherent bias and creating synthetic data sets based on known statistics.
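One such sampling method, reweighting examples so that an underrepresented group carries equal total weight during training, takes only a few lines; the 90/10 group split below is an invented illustration:

```python
# Sketch of bias-aware sampling: weight training examples inversely to
# group frequency so each group contributes equal total weight.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 3)
y = np.random.randint(0, 2, 200)
group = np.random.choice(["A", "B"], 200, p=[0.9, 0.1])  # group B is 10% of data

freq = {g: np.mean(group == g) for g in ("A", "B")}
weights = np.array([1.0 / freq[g] for g in group])

model = LogisticRegression().fit(X, y, sample_weight=weights)
```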
Guardrails to prevent users from blindly trusting AI can be put in place. In medicine, for example, misdiagnoses can be devastating to patients. The problems include false-positive results that cause distress; wrong or unnecessary treatments or surgeries; or, even worse, false negatives, so that patients do not get the correct diagnosis until a disease has reached the terminal stage.
Technology may find some solutions to these challenges, including explainability. For example, nascent approaches to model transparency include local interpretable model-agnostic explanations (LIME), which attempt to identify the parts of the input data a trained model relies on most to make predictions.
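A hedged sketch of that idea using the open-source lime package on a toy tabular model (the data, feature names, and classifier are synthetic placeholders):

```python
# Sketch of LIME on a toy tabular model: train a classifier, then ask
# LIME which features drove one particular prediction.
# Requires the open-source `lime` package (pip install lime).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=[f"f{i}" for i in range(5)],
    class_names=["no", "yes"], mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())  # top features with signed weights
```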

Fifth: Scaling up the use of AI for social good

As with any technology deployment for social good, the scaling up and successful application of AI will depend on the willingness of a large group of stakeholders—including collectors and generators of data, as well as governments and NGOs—to engage. These are still the early days of AI’s deployment for social good, and considerable progress will be needed before the vast potential becomes a reality. Public- and private-sector players all have a role to play.

Improving data accessibility for social-impact cases

A wide range of stakeholders owns, controls, collects, or generates the data that could be deployed for AI solutions. Governments are among the most significant collectors of information, which can include tax, health, and education data. Massive volumes of data are also collected by private companies—including satellite operators, telecommunications firms, utilities, and technology companies that run digital platforms, as well as social-media sites and search operations. These data sets may contain highly confidential personal information that cannot be shared without being anonymized. But private operators may also commercialize their data sets, which may therefore be unavailable for pro-bono social-good cases.
Overcoming this accessibility challenge will probably require a global call to action to record data and make it more readily available for well-defined societal initiatives.
Data collectors and generators will need to be encouraged—and possibly mandated—to open access to subsets of their data when that could be in the clear public interest. This is already starting to happen in some areas. For example, many satellite data companies participate in the International Charter on Space and Major Disasters, which commits them to open access to satellite data during emergencies, such as the September 2018 tsunami in Indonesia and Hurricane Michael, which hit the southeastern United States in October 2018.
Close collaboration between NGOs and data collectors and generators could also help facilitate this push to make data more accessible. Funding will be required from governments and foundations for initiatives to record and store data that could be used for social ends.
Even if the data are accessible, using them presents challenges. Continued investment will be needed to support high-quality data labeling. And multiple stakeholders will have to commit themselves to store data so that they can be accessed in a coordinated way and to use the same data-recording standards where possible to ensure seamless interoperability.
Issues of data quality and of potential bias and fairness will also have to be addressed if the data are to be deployed usefully. Transparency will be a key for bias and fairness. A deep understanding of the data, their provenance, and their characteristics must be captured, so that others using the data set understand the potential flaws.
All this is likely to require collaboration among companies, governments, and NGOs to set up regular data forums, in each industry, to work on the availability and accessibility of data and on connectivity issues. Ideally, these stakeholders would set global industry standards and collaborate closely on use cases to ensure that implementation becomes feasible.

Overcoming AI talent shortages is essential for implementing AI-based solutions for social impact

The long-term solution to the talent challenges we have identified will be to recruit more students to major in computer science and specialize in AI. That could be spurred by significant increases in funding—both grants and scholarships—for tertiary education and for PhDs in AI-related fields. Given the high salaries AI expertise commands today, the market may react with a surge in demand for such an education, although the advanced math skills needed could discourage many people.
Sustaining or even increasing current educational opportunities would be helpful. These opportunities include “AI residencies”—one-year training programs at corporate research labs—and shorter-term AI “boot camps” and academies for midcareer professionals. An advanced degree typically is not required for these programs, which can train participants in the practice of AI research without requiring them to spend years in a PhD program.
Given the shortage of experienced AI professionals in the social sector, companies with AI talent could play a major role in focusing more effort on AI solutions that have a social impact. For example, they could encourage employees to volunteer and support or coach noncommercial organizations that want to adopt, deploy, and sustain high-impact AI solutions. Companies and universities with AI talent could also allocate some of their research capacity to new social-benefit AI capabilities or solutions that cannot otherwise attract people with the requisite skills.
Overcoming the shortage of talent that can manage AI implementations will probably require governments and educational providers to work with companies and social-sector organizations to develop more free or low-cost online training courses. Foundations could provide funding for such initiatives.
Task forces of tech and business translators from governments, corporations, and social organizations, as well as freelancers, could be established to help teach NGOs about AI through relatable case studies. Beyond coaching, these task forces could help NGOs scope potential projects, support deployment, and plan sustainable road maps.
From the modest library of use cases that we have begun to compile, we can already see tremendous potential for using AI to address the world’s most important challenges. While that potential is impressive, turning it into reality on the scale it deserves will require focus, collaboration, goodwill, funding, and a determination among many stakeholders to work for the benefit of society. We are only just setting out on this journey. Reaching the destination will be a step-by-step process of confronting barriers and obstacles. We can see the moon, but getting there will require more work and a solid conviction that the goal is worth all the effort—for the sake of everyone.

About the author(s)

Michael Chui is a partner and James Manyika is chairman and a director of the McKinsey Global Institute. Martin Harrysson and Roger Roberts are partners in McKinsey’s Silicon Valley office, where Rita Chung is a consultant. Pieter Nel is a specialist in the New York office; Ashley van Heteren is an expert associate principal in the Amsterdam office.

The Future of Work is here… what are you doing about it?

#futureofwork #digitaltransformation #shiftmindset #leadership

Retraining and reskilling workers in the age of automation

The world of work faces an epochal transition. By 2030, according to a recent McKinsey Global Institute (MGI) report, as many as 375 million workers—or roughly 14 percent of the global workforce—may need to switch occupational categories as digitization, automation, and advances in artificial intelligence disrupt the world of work. The kinds of skills companies require will shift, with profound implications for the career paths individuals will need to pursue.
How big is that challenge?
In terms of magnitude, it’s akin to coping with the large-scale shift from agricultural work to manufacturing that occurred in the early 20th century in North America and Europe, and more recently in China. But in terms of who must find new jobs, we are moving into uncharted territory. Those earlier workforce transformations took place over many decades, allowing older workers to retire and new entrants to the workforce to transition to the growing industries. But the speed of change today is potentially faster. The task confronting every economy, particularly advanced economies, will likely be to retrain and redeploy tens of millions of mid-career, middle-age workers. As the MGI report notes, “there are few precedents in which societies have successfully retrained such large numbers of people.”
So far, growing awareness of the scale of the task ahead has yet to translate into action. Indeed, public spending on labor-force training and support has fallen steadily for years in most member countries of the Organisation for Economic Co-Operation and Development (OECD). Nor do corporate-training budgets appear to be on any kind of upswing.
But that may be about to change.
Among companies on the front lines, according to a recent McKinsey survey, executives increasingly see investing in retraining and “upskilling” existing workers as an urgent business priority—and they also believe that this is an issue where corporations, not governments, must take the lead. Our survey, which was in the field in late 2017, polled more than 1,500 respondents from business, the public sector, and nonprofits across regions, industries, and sectors. The analysis that follows focuses on answers from roughly 300 executives at companies with more than $100 million in annual revenues.
Among this group, 66 percent see “addressing potential skills gaps related to automation/digitization” within their workforce as at least a “top-ten priority.” Nearly 30 percent put it in the top five. The driver behind this sense of urgency is the accelerating pace of enterprise-wide transformation. Looking back over the past five years, only about a third of executives in our survey said technological change had caused them to retrain or replace more than a quarter of their employees.
But when they look out over the next five years, that narrative changes.
Sixty-two percent of executives believe they will need to retrain or replace more than a quarter of their workforce between now and 2023 due to advancing automation and digitization. The threat looms larger in the United States and Europe (64 percent and 70 percent respectively) than in the rest of the world (only 55 percent)—and it is felt especially acutely among the biggest companies. Seventy percent of executives at companies with more than $500 million in annual revenues see technological disruption over the next five years affecting more than a quarter of their workers.
Appropriately, this keen sense of the challenge ahead comes with a strong feeling of ownership. While they clearly do not expect to solve this alone—forging creative partnerships with a wide range of relevant players, for example, will be critical—by nearly a 5:1 margin, the executives in our latest survey believe that corporations, not governments, educators, or individual workers, should take the lead in trying to close the looming skills gap. That’s the view of 64 percent of the private-sector executives in the United States who see this as a top-ten priority issue, and 59 percent in Europe.
As for solutions, 82 percent of executives at companies with more than $100 million in annual revenues believe retraining and reskilling must be at least half of the answer to addressing their skills gap. Within that consensus, though, there were clear regional differences. Fully 94 percent of those surveyed in Europe insisted the answer would be either an equal mix of hiring and retraining or mainly retraining, versus a strong but less resounding 62 percent in this camp in the United States. By contrast, 35 percent of Americans thought the challenge would have to be met mainly or exclusively by hiring new talent, compared with just 7 percent in this camp in Europe.
Now the bad news: only 16 percent of private-sector business leaders in this group feel “very prepared” to address potential skills gaps, with roughly twice as many feeling either “somewhat unprepared” or “very unprepared.” The majority felt “somewhat prepared”—hardly a clarion call of confidence.
What are the main barriers? About one-third of executives feel an urgent need to rethink and upgrade their current HR infrastructure. Many companies are also struggling to figure out how job roles will change and what kind of talent they will require over the next five to ten years. Some executives who saw this as a top priority—42 percent in the United States, 24 percent in Europe, and 31 percent in the rest of the world—admit they currently lack a “good understanding of how automation and/or digitization will affect our future skills needs.”
Such a high degree of anxiety is understandable. In our experience, too much traditional training and retraining goes off the rails because it delivers no clear pathway to new work, relies too heavily on theory versus practice, and fails to show a return on investment. Generation, a global youth-employment nonprofit founded in 2015 by McKinsey, deliberately set out to address those shortcomings. Working in five countries across more than 20 professions, Generation runs programs that target training to where strong demand for jobs exists and gathers the data needed to prove the return on investment (ROI) to learners and employers. As a result, Generation’s more than 16,000 graduates have a job-placement rate above 82 percent, job retention of 72 percent at one year, and incomes two to six times higher than before the program. Generation will soon pilot a new initiative, Re-Generation, to apply this same formula—which includes robust partnerships with employers, governments, and nonprofits—to helping mid-career employees learn new skills for new jobs.
For many companies, cracking the code on reskilling is partly about retaining their “license to operate” by empowering employees to be more productive. Thirty-eight percent of executives in our survey, across all regions, cited the desire to “align with our organization’s mission and values” as a key reason for taking action. In a similar vein, at last winter’s World Economic Forum in Davos, 80 percent of CEOs who were investing heavily in artificial intelligence also publicly pledged to retain and retrain existing employees.
But the biggest driver is this: as digitization, automation, and AI reshape whole industries and every enterprise, the only way to realize the potential productivity dividends from that investment will be to have the people and processes in place to capture it. Managing this transition well, in short, is not just a social good; it’s a competitive imperative. That’s why a resounding majority of respondents—64 percent across Europe, the United States, and the rest of the world—said the main reason they were willing to invest in retraining was “to increase employee productivity.”
We hear that thought echoed in a growing number of C-suite conversations we are having these days. At the moment, most top executives have far more questions than answers about what it will take to meet the reskilling challenge at the kind of scale the next decade will likely demand. They ask: How can I map the future against my current talent pool and processes? What part of future employment demand can I meet by retraining existing workers, and what is the ROI of doing so, versus simply hiring new ones? How best can I tap into what are, for me, nontraditional talent pools? What partners, either in the private, public, or nongovernmental-organization (NGO) sectors, might help me succeed—and what are our respective roles?
Good questions all.
Success will require first developing a granular map of how technology will change the skill requirements within your company. Once this is understood, the next step will be deciding whether to tap into new models of online and offline learning and training or partner with traditional educational providers. (Over time, a more fundamental rethinking of 100-year-old educational models will also be needed.) Policy makers will need to consider new forms of unemployment income and worker transition support, and foster more intensive and innovative collaboration between the public and private sectors. Individuals will need to step up too, as will governments. Depending on the speed and scale of the coming workforce transition, as MGI noted in its recent report, many countries may conclude they will need to undertake “initiatives on the scale of the Marshall Plan.”
But for now, we simply take comfort from the clear message of our latest survey: among large companies, senior executives see an urgent need to rethink and retool their role in helping workers develop the right skills for a rapidly changing economy—and their will to meet this challenge is strong. That’s not a bad place to start.

About the author(s)

Pablo Illanes is a partner in McKinsey’s Stamford office, Susan Lund is a partner of the McKinsey Global Institute, Mona Mourshed and Scott Rutherford are senior partners in the Washington, DC, office, and Magnus Tyreman is a senior partner in the Stockholm office.

Teaching Robots Right from Wrong?

Dec 7, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Vyacheslav Polonski and Jane Zavalishina

Curated by Helena M. Herrero Lamuedra

Today, it is difficult to imagine a technology that is as enthralling and terrifying as machine learning. While media coverage and research papers consistently tout the potential of machine learning to become the biggest driver of positive change in business and society, the lingering question on everyone’s mind is: “Well, what if it all goes terribly wrong?”

For years, experts have warned against the unanticipated effects of general artificial intelligence (AI) on society. Ray Kurzweil predicts that by 2029 intelligent machines will be able to outsmart human beings. Stephen Hawking argues that “once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate”. Elon Musk warns that AI may constitute a “fundamental risk to the existence of human civilization”. Alarmist views on the terrifying potential of general AI abound in the media.

More often than not, these dystopian prophecies have been met with calls for a more ethical implementation of AI systems; that somehow engineers should imbue autonomous systems with a sense of ethics. According to some AI experts, we can teach our future robot overlords to tell right from wrong, akin to a “good Samaritan AI” that will always act justly on its own and help humans in distress.

Although this future is still decades away, there is much uncertainty as to how, if at all, we will reach this level of general machine intelligence. More crucial at the moment, however, is that even the narrow AI applications that exist today require urgent attention to the ways in which they make moral decisions in practical, day-to-day situations: for example, when algorithms decide who gets access to loans, or when self-driving cars have to calculate the value of a human life in hazardous situations.

Teaching morality to machines is hard because humans can’t objectively convey morality in measurable metrics that a computer can process. In fact, it is even questionable whether we, as humans, have a sound understanding of morality that we can all agree on. In moral dilemmas, humans tend to rely on gut feeling instead of elaborate cost-benefit calculations. Machines, on the other hand, need explicit and objective metrics that can be clearly measured and optimized. For example, an AI player can excel in games with clear rules and boundaries by learning how to optimize the score through repeated playthroughs.
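
That last point can be made concrete with a minimal sketch of tabular Q-learning, a standard reinforcement-learning method; the five-cell “corridor” world below is invented for illustration. The agent learns whatever the score rewards, and nothing more:

```python
# Minimal sketch of "optimizing a clear score through repeated playthroughs":
# tabular Q-learning on a 5-cell corridor where reaching the right end pays 1.
import random

n_states, actions = 5, (-1, +1)                  # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

for _ in range(2000):                            # repeated playthroughs
    s = 0
    while s < n_states - 1:
        a = random.choice(actions)               # explore randomly (off-policy)
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0   # the only "value" it knows
        Q[(s, a)] += 0.1 * (r + 0.9 * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# The learned policy: always move right, the score-maximizing behavior.
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)])
```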

After its experiments with deep reinforcement learning on Atari video games, Alphabet’s DeepMind was able to beat the best human players of Go. Meanwhile, OpenAI amassed “lifetimes” of experiences to beat the best human players at the Valve Dota 2 tournament, one of the most popular e-sports competitions globally.

But in real-life situations, optimization problems are vastly more complex. For example, how do you teach a machine to algorithmically maximize fairness or to overcome racial and gender biases in its training data? A machine cannot be taught what is fair unless the engineers designing the AI system have a precise conception of what fairness is.

This has led some authors to worry that a naive application of algorithms to everyday problems could amplify structural discrimination and reproduce biases in the data they are based on. In the worst case, algorithms could deny services to minorities, impede people’s employment opportunities or get the wrong political candidate elected.

Based on our experiences in machine learning, we believe there are three ways to begin designing more ethically aligned machines:

1. Define ethical behavior

AI researchers and ethicists need to formulate ethical values as quantifiable parameters. In other words, they need to provide machines with explicit answers and decision rules for any potential ethical dilemmas they might encounter. This would require humans to agree among themselves on the most ethical course of action in any given situation – a challenging but not impossible task. For example, Germany’s Ethics Commission on Automated and Connected Driving has recommended programming ethical values into self-driving cars to prioritize the protection of human life above all else. In the event of an unavoidable accident, the car should be “prohibited to offset victims against one another”. In other words, a car shouldn’t choose whom to harm on the basis of individual features, such as age, gender or physical/mental constitution, when a crash is inescapable.
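
Read in engineering terms, one way to honor such a rule is to ensure the decision function is never given personal attributes to weigh. A deliberately tiny, hypothetical sketch:

```python
# Illustrative sketch of "programming in" such a rule: the emergency-maneuver
# chooser only ever receives physical quantities, so it cannot weigh victims
# by age, gender, or any other personal attribute.
def choose_maneuver(options):
    # options: (maneuver_name, collision_probability) pairs and nothing else.
    return min(options, key=lambda o: o[1])[0]

print(choose_maneuver([("brake_straight", 0.30), ("swerve_left", 0.55)]))
```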

2. Crowdsource our morality

Engineers need to collect enough data on explicit ethical measures to appropriately train AI algorithms. Even after we have defined specific metrics for our ethical values, an AI system might still struggle to pick them up if there is not enough unbiased data to train the models. Getting appropriate data is challenging, because ethical norms cannot always be clearly standardized. Different situations require different ethical approaches, and in some situations there may not be a single ethical course of action at all – just think of the lethal autonomous weapons currently being developed for military applications. One way of addressing this would be to crowdsource potential solutions to moral dilemmas from millions of humans. For instance, MIT’s Moral Machine project shows how crowdsourced data can be used to effectively train machines to make better moral decisions in the context of self-driving cars.
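
In that spirit, a hypothetical pipeline might reduce each dilemma to a few numeric features and fit a model to the crowd’s majority votes. The features, toy data, and model choice below are illustrative assumptions, not the Moral Machine’s actual methodology.

```python
# Toy sketch of learning a "crowd-sourced ethical metric" from dilemma votes.
# Requires scikit-learn and NumPy; all data here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes a dilemma as (difference in lives saved by option A,
# difference in crossing legality); the label is the crowd's majority choice.
X = np.array([
    [ 2,  0],   # option A saves 2 more people, legality equal
    [-1,  1],   # A saves 1 fewer, but A's group crossed legally
    [ 0, -1],
    [ 3,  1],
    [-2,  0],
    [ 1, -1],
])
y = np.array([1, 1, 0, 1, 0, 1])  # 1 = crowd preferred option A

model = LogisticRegression().fit(X, y)

# The learned weights act as an empirical ethical metric, crowd biases included.
print(dict(zip(["lives_saved_delta", "legality_delta"], model.coef_[0])))
print(model.predict([[1, 0]]))    # predicted majority preference for a new dilemma
```

That last caveat matters: whatever biases the crowd holds are learned along with its values, which is one more reason the transparency recommendation below is not optional.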

3. Make AI transparent

Policymakers need to implement guidelines that make AI decision-making more transparent, especially with regard to ethical metrics and outcomes. If AI systems make mistakes or have undesired consequences, we cannot accept “the algorithm did it” as an adequate excuse. But we also know that demanding full algorithmic transparency is technically untenable (and, quite frankly, not very useful): neural networks are simply too complex to be scrutinized by human inspectors. Instead, there should be more transparency about how engineers quantified ethical values before programming them, and about the outcomes the AI has produced as a result of those choices. For self-driving cars, for instance, this could mean keeping detailed logs of all automated decisions at all times to ensure their ethical accountability.
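
To give a sense of what such logging could look like, here is a minimal sketch of an append-only audit trail for automated decisions. The record fields and file format are assumptions for illustration; a production system would also need tamper-evident storage and retention policies.

```python
# Hypothetical decision log: every automated choice is recorded with its
# inputs, the ethical metric that was optimized, and the resulting action.
import json
import time

def log_decision(logfile, context, metric_name, metric_value, action):
    record = {
        "timestamp": time.time(),
        "context": context,             # summary of the inputs to the decision
        "ethical_metric": metric_name,  # which quantified value was optimized
        "metric_value": metric_value,
        "action": action,
    }
    logfile.write(json.dumps(record) + "\n")  # append-only JSON-lines trail

with open("decisions.log", "a") as f:
    log_decision(f, {"obstacle": "pedestrian", "speed_kmh": 42},
                 "min_expected_harm", 0.07, "emergency_brake")
```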

We believe that these three recommendations should be seen as a starting point for developing ethically aligned AI systems. If we fail to imbue ethics into AI systems, we may place ourselves in the dangerous situation of allowing algorithms to decide what’s best for us. For example, in an unavoidable accident situation, a self-driving car will need to make some decision, for better or worse. But if the car’s designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm. This means that we cannot simply refuse to quantify our values. By walking away from this critical ethical discussion, we are making an implicit moral choice. And as machine intelligence becomes increasingly pervasive in society, the price of inaction could be enormous – it could negatively affect the lives of billions of people.

Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is and how it can be measured and optimized. For AI engineers, this may seem like a daunting task. After all, defining moral values is a challenge mankind has struggled with throughout its history. Nevertheless, the state of AI research requires us to finally define morality and quantify it in explicit terms. Engineers cannot build a “good Samaritan AI” as long as they lack a formula for the good Samaritan human.

Why Tomorrow’s Customers Won’t Shop at Today’s Retailers

Oct 18, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Dan Clay

Curated by Helena M. Herrero Lamuedra

Meet Dawn. Her T-shirt is connected to the internet, and her tattoo unlocks her car door. She’s never gone shopping, but she gets a package on her doorstep every week. She’s never been lost or late, and she’s never once waited in line. She never goes anywhere without visiting in VR first, and she doesn’t buy anything that wasn’t made just for her.

Dawn is an average 25-year-old in the not-so-distant future. She craves mobility, flexibility, and uniqueness; she spends more on experience than she does on products; she demands speed, transparency, and control; and she has enough choice to avoid any company that doesn’t give her what she wants. We’re in the midst of remarkable change not seen since the Industrial Revolution, and a noticeable gap is growing between what Dawn wants and what traditional retailers provide.

In 2005 Amazon launched free two-day shipping. In 2014 it launched free two-hour shipping. It’s hard to get faster than “Now,” and once immediacy becomes table stakes, competition will move to prediction. By intelligently applying data from our connected devices, smart digital assistants will be able to deliver products before we even acknowledge the need: Imagine a pharmacy that knows you’re about to get sick; an electronics retailer that knows you forgot your charger; an online merchant that knows you’re out of toilet paper; and a subscription service that knows you have a wedding coming up, that you have a little extra in your bank account, and that you look good in blue. Near-perfect predictions are the future of retail, and it’s up to CX and UX designers to ensure that they are greeted as miraculous time-savers rather than creepy intrusions.
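
Under the hood, the simplest version of this kind of prediction needs little more than a purchase history. As a deliberately naive, hypothetical sketch: estimate a household’s consumption rate from past order dates and trigger a reorder before the predicted run-out. The dates and thresholds below are made up.

```python
# Toy replenishment predictor: average the gaps between past orders and
# reorder a few days before the projected run-out. All data is invented.
from datetime import date, timedelta

orders = [date(2017, 7, 1), date(2017, 7, 29), date(2017, 8, 27)]  # past purchases

gaps = [(later - earlier).days for earlier, later in zip(orders, orders[1:])]
avg_gap = sum(gaps) / len(gaps)                    # average days between orders
predicted_runout = orders[-1] + timedelta(days=round(avg_gap))

today = date(2017, 9, 22)
if today >= predicted_runout - timedelta(days=3):  # 3-day safety margin
    print(f"Reorder now; predicted run-out around {predicted_runout}")
```

Real systems would blend far more signals than order dates, which is precisely where the line between time-saver and intrusion gets drawn.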

Every product is personalized

While consumers are increasingly wary about how much of their personal data is being tracked, they’re also increasingly willing to trade privacy for more tangible benefits. It then falls on companies to ensure those benefits justify the exchange. In the retail space this increasingly means perfectly tailored products and a more personally relevant experience. Etsy recently acquired an AI startup to make its search experience more relevant and tailored. HelloAva provides customers with personalized skincare product recommendations based on machine learning combined with a few texts and a selfie. Amazon, constantly at the forefront of customer needs, recently acquired a patent for a custom clothing manufacturing system.

Market to the machines

Dawn, our customer of the future, won’t need to customize all of her purchases; for many of her needs, she’ll give her intelligent, IoT-enabled agent (think Alexa with a master’s degree) personalized filters so the agent can buy on her behalf. When Siri is choosing which shoes to rent, the robot almost becomes the customer, and retailers must win over smart AI assistants before they even reach end customers. Netflix already has a team of people working on this new realm of marketing to machines. As CEO Reed Hastings quipped at this year’s Mobile World Congress, “I’m not sure if in 20 to 50 years we are going to be entertaining you, or entertaining AIs.”
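
What would “winning over” such an agent even mean? In a minimal, hypothetical sketch, the agent applies its owner’s standing filters before the retailer ever gets a hearing; the offer fields and filter values below are invented for illustration.

```python
# Toy shopping agent: the retailer's offer must pass the owner's filters
# before price even matters. All fields and thresholds are illustrative.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Offer:
    retailer: str
    price: float
    eco_certified: bool
    delivery_days: int

FILTERS = {"max_price": 80.0, "eco_only": True, "max_delivery_days": 2}

def agent_pick(offers: List[Offer]) -> Optional[Offer]:
    eligible = [
        o for o in offers
        if o.price <= FILTERS["max_price"]
        and (o.eco_certified or not FILTERS["eco_only"])
        and o.delivery_days <= FILTERS["max_delivery_days"]
    ]
    # The human never sees the losing offers: the agent buys the cheapest match.
    return min(eligible, key=lambda o: o.price) if eligible else None

offers = [
    Offer("RetailerA", 65.0, eco_certified=True, delivery_days=1),
    Offer("RetailerB", 50.0, eco_certified=False, delivery_days=1),
    Offer("RetailerC", 95.0, eco_certified=True, delivery_days=2),
]
print(agent_pick(offers))  # RetailerB is cheaper but fails the eco filter
```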

Branded, immersive experiences matter more than ever

As online shopping and automation increase, physical retail spaces will have to deliver much more than just a good shopping experience to compel people to visit. This could be through added education (like the expert stylists at Nordstrom’s merchandise-free store), heightened service personalization (like Asics’ on-site 3D foot mapping and gait-cycle analysis), or constantly evolving entertainment (like the monthly changing “exhibition” at Gentle Monster’s Seoul flagship store).

In this context, brand is becoming more than a value proposition or signifier—it’s the essential ingredient preventing companies from becoming commoditized by an on-demand, automated world where your car picks its own motor oil. Brands have a vital responsibility to create a community for customers to belong to and believe in.

A mobile world that feels like a single channel experience

Dawn will be increasingly mobile, and she’ll expect retailers to move along with her. She may research dresses on her phone and expect the store associate to know what she’s looked at. It’s no secret that mobile shopping is continuing to grow, but retailers need to think less about developing separate strategies for their channels and more about maintaining a continuous flow with the one channel that matters: the customer channel.

WeChat, for example, China’s largest social media platform, is used for everything from online shopping and paying at supermarkets to ordering a taxi and getting flight updates, creating a seamless “single channel” experience across all interactions. Snapchat’s new Context Cards, which let users read location-based reviews, view business information, and hail rides all within the app, build toward a similar single-channel experience.

The future promises profound change. Yet perhaps the most pressing challenge for retailers is keeping up with customers’ expectations for immediacy, personalization, innovative experiences, and the myriad other ways technological and societal changes are making Dawn the most demanding customer the retail industry has ever seen. The future is daunting, but it is also full of opportunity, and the retailers that can anticipate the needs of the customer of the future are well-poised for success in the years to come.

The Fourth Industrial Revolution and why it’s relevant

Sep 25, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Klaus Schwab

Curated by Helena M. Herrero Lamuedra

We stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. In its scale, scope, and complexity, the transformation will be unlike anything humankind has experienced before. We do not yet know just how it will unfold, but one thing is clear: the response to it must be integrated and comprehensive, involving all stakeholders of the global polity, from the public and private sectors to academia and civil society.

The First Industrial Revolution used water and steam power to mechanize production. The Second used electric power to create mass production. The Third used electronics and information technology to automate production. Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.

There are three reasons why today’s transformations represent not merely a prolongation of the Third Industrial Revolution but rather the arrival of a Fourth and distinct one: velocity, scope, and systems impact. The speed of current breakthroughs has no historical precedent. When compared with previous industrial revolutions, the Fourth is evolving at an exponential rather than a linear pace. Moreover, it is disrupting almost every industry in every country. And the breadth and depth of these changes herald the transformation of entire systems of production, management, and governance.

The possibilities of billions of people connected by mobile devices, with unprecedented processing power, storage capacity, and access to knowledge, are unlimited. And these possibilities will be multiplied by emerging technology breakthroughs in fields such as artificial intelligence, robotics, the Internet of Things, autonomous vehicles, 3-D printing, nanotechnology, biotechnology, materials science, energy storage, and quantum computing.

Already, artificial intelligence is all around us, from self-driving cars and drones to virtual assistants and software that translates or invests. Impressive progress has been made in AI in recent years, driven by exponential increases in computing power and by the availability of vast amounts of data, from software used to discover new drugs to algorithms used to predict our cultural interests. Digital fabrication technologies, meanwhile, are interacting with the biological world on a daily basis. Engineers, designers, and architects are combining computational design, additive manufacturing, materials engineering, and synthetic biology to pioneer a symbiosis between microorganisms, our bodies, the products we consume, and even the buildings we inhabit.

Challenges and opportunities

Like the revolutions that preceded it, the Fourth Industrial Revolution has the potential to raise global income levels and improve the quality of life for populations around the world. To date, those who have gained the most from it have been consumers able to afford and access the digital world; technology has made possible new products and services that increase the efficiency and pleasure of our personal lives. Ordering a cab, booking a flight, buying a product, making a payment, listening to music, watching a film, or playing a game—any of these can now be done remotely.

In the future, technological innovation will also lead to a supply-side miracle, with long-term gains in efficiency and productivity. Transportation and communication costs will drop, logistics and global supply chains will become more effective, and the cost of trade will diminish, all of which will open new markets and drive economic growth.

At the same time, as the economists Erik Brynjolfsson and Andrew McAfee have pointed out, the revolution could yield greater inequality, particularly in its potential to disrupt labor markets. As automation substitutes for labor across the entire economy, the net displacement of workers by machines might exacerbate the gap between returns to capital and returns to labor. On the other hand, it is also possible that the displacement of workers by technology will, in aggregate, result in a net increase in safe and rewarding jobs.

We cannot foresee at this point which scenario is likely to emerge, and history suggests that the outcome is likely to be some combination of the two. However, I am convinced of one thing—that in the future, talent, more than capital, will represent the critical factor of production. This will give rise to a job market increasingly segregated into “low-skill/low-pay” and “high-skill/high-pay” segments, which in turn will lead to an increase in social tensions.

In addition to being a key economic concern, inequality represents the greatest societal concern associated with the Fourth Industrial Revolution. The largest beneficiaries of innovation tend to be the providers of intellectual and physical capital—the innovators, shareholders, and investors—which explains the rising gap in wealth between those dependent on capital versus labor. Technology is therefore one of the main reasons why incomes have stagnated, or even decreased, for a majority of the population in high-income countries: the demand for highly skilled workers has increased while the demand for workers with less education and lower skills has decreased. The result is a job market with a strong demand at the high and low ends, but a hollowing out of the middle.

This helps explain why so many workers are disillusioned and fearful that their own real incomes and those of their children will continue to stagnate. It also helps explain why middle classes around the world are increasingly experiencing a pervasive sense of dissatisfaction and unfairness. A winner-takes-all economy that offers only limited access to the middle class is a recipe for democratic malaise and dereliction.

Discontent can also be fueled by the pervasiveness of digital technologies and the dynamics of information sharing typified by social media. More than 30 percent of the global population now uses social media platforms to connect, learn, and share information. In an ideal world, these interactions would provide an opportunity for cross-cultural understanding and cohesion. However, they can also create and propagate unrealistic expectations as to what constitutes success for an individual or a group, as well as offer opportunities for extreme ideas and ideologies to spread.

The impact on business

An underlying theme in my conversations with global CEOs and senior business executives is that the acceleration of innovation and the velocity of disruption are hard to comprehend or anticipate, and that these drivers constitute a source of constant surprise, even for the best-connected and most well-informed. Indeed, across all industries, there is clear evidence that the technologies that underpin the Fourth Industrial Revolution are having a major impact on businesses.

On the supply side, many industries are seeing the introduction of new technologies that create entirely new ways of serving existing needs and significantly disrupt existing industry value chains. Disruption is also flowing from agile, innovative competitors who, thanks to access to global digital platforms for research, development, marketing, sales, and distribution, can oust well-established incumbents faster than ever by improving the quality, speed, or price at which value is delivered.

Major shifts on the demand side are also occurring, as growing transparency, consumer engagement, and new patterns of consumer behavior (increasingly built upon access to mobile networks and data) force companies to adapt the way they design, market, and deliver products and services.

A key trend is the development of technology-enabled platforms that combine both demand and supply to disrupt existing industry structures, such as those we see within the “sharing” or “on demand” economy. These technology platforms, rendered easy to use by the smartphone, convene people, assets, and data—thus creating entirely new ways of consuming goods and services in the process. In addition, they lower the barriers for businesses and individuals to create wealth, altering the personal and professional environments of workers. These new platform businesses are rapidly multiplying into many new services, ranging from laundry to shopping, from chores to parking, from massages to travel.

On the whole, there are four main effects that the Fourth Industrial Revolution has on business—on customer expectations, on product enhancement, on collaborative innovation, and on organizational forms. Whether consumers or businesses, customers are increasingly at the epicenter of the economy, which is all about improving how customers are served. Physical products and services, moreover, can now be enhanced with digital capabilities that increase their value. New technologies make assets more durable and resilient, while data and analytics are transforming how they are maintained. A world of customer experiences, data-based services, and asset performance through analytics, meanwhile, requires new forms of collaboration, particularly given the speed at which innovation and disruption are taking place. And the emergence of global platforms and other new business models, finally, means that talent, culture, and organizational forms will have to be rethought.

Overall, the inexorable shift from simple digitization (the Third Industrial Revolution) to innovation based on combinations of technologies (the Fourth Industrial Revolution) is forcing companies to reexamine the way they do business. The bottom line, however, is the same: business leaders and senior executives need to understand their changing environment, challenge the assumptions of their operating teams, and relentlessly and continuously innovate.

The impact on government

As the physical, digital, and biological worlds continue to converge, new technologies and platforms will increasingly enable citizens to engage with governments, voice their opinions, coordinate their efforts, and even circumvent the supervision of public authorities. Simultaneously, governments will gain new technological powers to increase their control over populations, based on pervasive surveillance systems and the ability to control digital infrastructure. On the whole, however, governments will increasingly face pressure to change their current approach to public engagement and policymaking, as their central role of conducting policy diminishes owing to new sources of competition and the redistribution and decentralization of power that new technologies make possible.

Ultimately, the ability of government systems and public authorities to adapt will determine their survival. If they prove capable of embracing a world of disruptive change, subjecting their structures to the levels of transparency and efficiency that will enable them to maintain their competitive edge, they will endure. If they cannot evolve, they will face increasing trouble.

This will be particularly true in the realm of regulation. Current systems of public policy and decision-making evolved alongside the Second Industrial Revolution, when decision-makers had time to study a specific issue and develop the necessary response or appropriate regulatory framework. The whole process was designed to be linear and mechanistic, following a strict “top down” approach.

But such an approach is no longer feasible. Given the Fourth Industrial Revolution’s rapid pace of change and broad impacts, legislators and regulators are being challenged to an unprecedented degree and for the most part are proving unable to cope.

How, then, can they preserve the interest of the consumers and the public at large while continuing to support innovation and technological development? By embracing “agile” governance, just as the private sector has increasingly adopted agile responses to software development and business operations more generally. This means regulators must continuously adapt to a new, fast-changing environment, reinventing themselves so they can truly understand what it is they are regulating. To do so, governments and regulatory agencies will need to collaborate closely with business and civil society.

The Fourth Industrial Revolution will also profoundly impact the nature of national and international security, affecting both the probability and the nature of conflict. The history of warfare and international security is the history of technological innovation, and today is no exception. Modern conflicts involving states are increasingly “hybrid” in nature, combining traditional battlefield techniques with elements previously associated with non-state actors. The distinction between war and peace, combatant and noncombatant, and even violence and nonviolence (think cyberwarfare) is becoming uncomfortably blurry.

As this process takes place and new technologies such as autonomous or biological weapons become easier to use, individuals and small groups will increasingly join states in being capable of causing mass harm. This new vulnerability will lead to new fears. But at the same time, advances in technology will create the potential to reduce the scale or impact of violence, through the development of new modes of protection, for example, or greater precision in targeting.

The impact on people

The Fourth Industrial Revolution, finally, will change not only what we do but also who we are. It will affect our identity and all the issues associated with it: our sense of privacy, our notions of ownership, our consumption patterns, the time we devote to work and leisure, and how we develop our careers, cultivate our skills, meet people, and nurture relationships. It is already changing our health and leading to a “quantified” self, and sooner than we think it may lead to human augmentation. The list is endless because it is bound only by our imagination.

I am a great enthusiast and early adopter of technology, but sometimes I wonder whether the inexorable integration of technology in our lives could diminish some of our quintessential human capacities, such as compassion and cooperation. Our relationship with our smartphones is a case in point. Constant connection may deprive us of one of life’s most important assets: the time to pause, reflect, and engage in meaningful conversation.

One of the greatest individual challenges posed by new information technologies is privacy. We instinctively understand why it is so essential, yet the tracking and sharing of information about us is a crucial part of the new connectivity. Debates about fundamental issues such as the impact on our inner lives of the loss of control over our data will only intensify in the years ahead. Similarly, the revolutions occurring in biotechnology and AI, which are redefining what it means to be human by pushing back the current thresholds of life span, health, cognition, and capabilities, will compel us to redefine our moral and ethical boundaries.

Shaping the future

Neither technology nor the disruption that comes with it is an exogenous force over which humans have no control. All of us are responsible for guiding its evolution, in the decisions we make on a daily basis as citizens, consumers, and investors. We should thus grasp the opportunity and power we have to shape the Fourth Industrial Revolution and direct it toward a future that reflects our common objectives and values.

To do this, however, we must develop a comprehensive and globally shared view of how technology is affecting our lives and reshaping our economic, social, cultural, and human environments. There has never been a time of greater promise, or one of greater potential peril. Today’s decision-makers, however, are too often trapped in traditional, linear thinking, or too absorbed by the multiple crises demanding their attention, to think strategically about the forces of disruption and innovation shaping our future.

In the end, it all comes down to people and values. We need to shape a future that works for all of us by putting people first and empowering them. In its most pessimistic, dehumanized form, the Fourth Industrial Revolution may indeed have the potential to “robotize” humanity and thus to deprive us of our heart and soul. But as a complement to the best parts of human nature—creativity, empathy, stewardship—it can also lift humanity into a new collective and moral consciousness based on a shared sense of destiny. It is incumbent on us all to make sure the latter prevails.

Digital Transformation: Strategy Push or Technology Pull?

Aug 24, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Niall McKeown

Curated by Helena M. Herrero Lamuedra

“If we understand what the technology is capable of, we will be in a better place to tell you how our organisation can leverage it” – says one business leader.

“This is what we want the business to achieve and how we’re going to get there.  Go find technology that helps make this happen” – says another.

So which comes first? Do we start with understanding what technology is capable of and devising a strategy to leverage it? Or do we define our strategy and then use technology to deliver it? Should leadership strategy push the business or should the rapid adoption of new technologies pull the business? Perhaps it’s a hybrid of strategy push and technology pull?

CIO.co.uk (an online magazine for Chief Information Officers) suggests that IT supports the business strategy. They argue that organisations should have an agile IT function capable of exploiting new technologies that facilitate delivery of the organisation’s strategic vision.

Harvard Business Review, as far back as 1980, has suggested that strategy pushes the business and that technology serves as a support function. MIT Sloan doesn’t sit on the fence either: it suggests that strategy, not technology, drives digital transformation.

Times, however, are changing. Most of these thought-leadership articles were written before the current wave of artificial intelligence. The explosion of new technologies and their rapid adoption by industry and consumers are creating massive opportunities for businesses that are technically informed, agile, opportunistic, and innovative. Few modern businesses can claim to be all four unless their leadership has at least studied formal frameworks for digital transformation and upgraded its thinking with new, data-driven approaches to decision-making, strategy planning, and leadership.

My own experience would suggest that the most advantaged leaders create strategy influenced by what is possible. They leverage new technologies as well as the assets that have always delivered competitive advantage to their business. They don’t abandon what makes them great, they augment it, enhance it, upgrade it. Transformation, however, is where they aim for step change not marginal gain. If I were to put a number on it, the most successfully transformed businesses are 80% strategy-pushed and 20% opportunistically technology-pulled.