#futureofwork #digitaltransformation #shiftmindset #leadership
Retraining and reskilling workers in the age of automation
Sep 25, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.
By Klaus Schwab
Curated by Helena M. Herrero Lamuedra
We stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. In its scale, scope, and complexity, the transformation will be unlike anything humankind has experienced before. We do not yet know just how it will unfold, but one thing is clear: the response to it must be integrated and comprehensive, involving all stakeholders of the global polity, from the public and private sectors to academia and civil society.
The First Industrial Revolution used water and steam power to mechanize production. The Second used electric power to create mass production. The Third used electronics and information technology to automate production. Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.
There are three reasons why today’s transformations represent not merely a prolongation of the Third Industrial Revolution but rather the arrival of a Fourth and distinct one: velocity, scope, and systems impact. The speed of current breakthroughs has no historical precedent. When compared with previous industrial revolutions, the Fourth is evolving at an exponential rather than a linear pace. Moreover, it is disrupting almost every industry in every country. And the breadth and depth of these changes herald the transformation of entire systems of production, management, and governance.
The possibilities of billions of people connected by mobile devices, with unprecedented processing power, storage capacity, and access to knowledge, are unlimited. And these possibilities will be multiplied by emerging technology breakthroughs in fields such as artificial intelligence, robotics, the Internet of Things, autonomous vehicles, 3-D printing, nanotechnology, biotechnology, materials science, energy storage, and quantum computing.
Already, artificial intelligence is all around us, from self-driving cars and drones to virtual assistants and software that translates or invests. Impressive progress has been made in AI in recent years, driven by exponential increases in computing power and by the availability of vast amounts of data, from software used to discover new drugs to algorithms used to predict our cultural interests. Digital fabrication technologies, meanwhile, are interacting with the biological world on a daily basis. Engineers, designers, and architects are combining computational design, additive manufacturing, materials engineering, and synthetic biology to pioneer a symbiosis between microorganisms, our bodies, the products we consume, and even the buildings we inhabit.
Challenges and opportunities
Like the revolutions that preceded it, the Fourth Industrial Revolution has the potential to raise global income levels and improve the quality of life for populations around the world. To date, those who have gained the most from it have been consumers able to afford and access the digital world; technology has made possible new products and services that increase the efficiency and pleasure of our personal lives. Ordering a cab, booking a flight, buying a product, making a payment, listening to music, watching a film, or playing a game—any of these can now be done remotely.
In the future, technological innovation will also lead to a supply-side miracle, with long-term gains in efficiency and productivity. Transportation and communication costs will drop, logistics and global supply chains will become more effective, and the cost of trade will diminish, all of which will open new markets and drive economic growth.
At the same time, as the economists Erik Brynjolfsson and Andrew McAfee have pointed out, the revolution could yield greater inequality, particularly in its potential to disrupt labor markets. As automation substitutes for labor across the entire economy, the net displacement of workers by machines might exacerbate the gap between returns to capital and returns to labor. On the other hand, it is also possible that the displacement of workers by technology will, in aggregate, result in a net increase in safe and rewarding jobs.
We cannot foresee at this point which scenario is likely to emerge, and history suggests that the outcome is likely to be some combination of the two. However, I am convinced of one thing—that in the future, talent, more than capital, will represent the critical factor of production. This will give rise to a job market increasingly segregated into “low-skill/low-pay” and “high-skill/high-pay” segments, which in turn will lead to an increase in social tensions.
In addition to being a key economic concern, inequality represents the greatest societal concern associated with the Fourth Industrial Revolution. The largest beneficiaries of innovation tend to be the providers of intellectual and physical capital—the innovators, shareholders, and investors—which explains the rising gap in wealth between those dependent on capital versus labor. Technology is therefore one of the main reasons why incomes have stagnated, or even decreased, for a majority of the population in high-income countries: the demand for highly skilled workers has increased while the demand for workers with less education and lower skills has decreased. The result is a job market with a strong demand at the high and low ends, but a hollowing out of the middle.
This helps explain why so many workers are disillusioned and fearful that their own real incomes and those of their children will continue to stagnate. It also helps explain why middle classes around the world are increasingly experiencing a pervasive sense of dissatisfaction and unfairness. A winner-takes-all economy that offers only limited access to the middle class is a recipe for democratic malaise and dereliction.
Discontent can also be fueled by the pervasiveness of digital technologies and the dynamics of information sharing typified by social media. More than 30 percent of the global population now uses social media platforms to connect, learn, and share information. In an ideal world, these interactions would provide an opportunity for cross-cultural understanding and cohesion. However, they can also create and propagate unrealistic expectations as to what constitutes success for an individual or a group, as well as offer opportunities for extreme ideas and ideologies to spread.
The impact on business
An underlying theme in my conversations with global CEOs and senior business executives is that the acceleration of innovation and the velocity of disruption are hard to comprehend or anticipate and that these drivers constitute a source of constant surprise, even for the best connected and most well informed. Indeed, across all industries, there is clear evidence that the technologies that underpin the Fourth Industrial Revolution are having a major impact on businesses.
On the supply side, many industries are seeing the introduction of new technologies that create entirely new ways of serving existing needs and significantly disrupt existing industry value chains. Disruption is also flowing from agile, innovative competitors who, thanks to access to global digital platforms for research, development, marketing, sales, and distribution, can oust well-established incumbents faster than ever by improving the quality, speed, or price at which value is delivered.
Major shifts on the demand side are also occurring, as growing transparency, consumer engagement, and new patterns of consumer behavior (increasingly built upon access to mobile networks and data) force companies to adapt the way they design, market, and deliver products and services.
A key trend is the development of technology-enabled platforms that combine both demand and supply to disrupt existing industry structures, such as those we see within the “sharing” or “on demand” economy. These technology platforms, rendered easy to use by the smartphone, convene people, assets, and data—thus creating entirely new ways of consuming goods and services in the process. In addition, they lower the barriers for businesses and individuals to create wealth, altering the personal and professional environments of workers. These new platform businesses are rapidly multiplying into many new services, ranging from laundry to shopping, from chores to parking, from massages to travel.
On the whole, there are four main effects that the Fourth Industrial Revolution has on business—on customer expectations, on product enhancement, on collaborative innovation, and on organizational forms. Whether consumers or businesses, customers are increasingly at the epicenter of the economy, which is all about improving how customers are served. Physical products and services, moreover, can now be enhanced with digital capabilities that increase their value. New technologies make assets more durable and resilient, while data and analytics are transforming how they are maintained. A world of customer experiences, data-based services, and asset performance through analytics, meanwhile, requires new forms of collaboration, particularly given the speed at which innovation and disruption are taking place. And the emergence of global platforms and other new business models, finally, means that talent, culture, and organizational forms will have to be rethought.
Overall, the inexorable shift from simple digitization (the Third Industrial Revolution) to innovation based on combinations of technologies (the Fourth Industrial Revolution) is forcing companies to reexamine the way they do business. The bottom line, however, is the same: business leaders and senior executives need to understand their changing environment, challenge the assumptions of their operating teams, and relentlessly and continuously innovate.
The impact on government
As the physical, digital, and biological worlds continue to converge, new technologies and platforms will increasingly enable citizens to engage with governments, voice their opinions, coordinate their efforts, and even circumvent the supervision of public authorities. Simultaneously, governments will gain new technological powers to increase their control over populations, based on pervasive surveillance systems and the ability to control digital infrastructure. On the whole, however, governments will increasingly face pressure to change their current approach to public engagement and policymaking, as their central role of conducting policy diminishes owing to new sources of competition and the redistribution and decentralization of power that new technologies make possible.
Ultimately, the ability of government systems and public authorities to adapt will determine their survival. If they prove capable of embracing a world of disruptive change, subjecting their structures to the levels of transparency and efficiency that will enable them to maintain their competitive edge, they will endure. If they cannot evolve, they will face increasing trouble.
This will be particularly true in the realm of regulation. Current systems of public policy and decision-making evolved alongside the Second Industrial Revolution, when decision-makers had time to study a specific issue and develop the necessary response or appropriate regulatory framework. The whole process was designed to be linear and mechanistic, following a strict “top down” approach.
But such an approach is no longer feasible. Given the Fourth Industrial Revolution’s rapid pace of change and broad impacts, legislators and regulators are being challenged to an unprecedented degree and for the most part are proving unable to cope.
How, then, can they preserve the interest of the consumers and the public at large while continuing to support innovation and technological development? By embracing “agile” governance, just as the private sector has increasingly adopted agile responses to software development and business operations more generally. This means regulators must continuously adapt to a new, fast-changing environment, reinventing themselves so they can truly understand what it is they are regulating. To do so, governments and regulatory agencies will need to collaborate closely with business and civil society.
The Fourth Industrial Revolution will also profoundly impact the nature of national and international security, affecting both the probability and the nature of conflict. The history of warfare and international security is the history of technological innovation, and today is no exception. Modern conflicts involving states are increasingly “hybrid” in nature, combining traditional battlefield techniques with elements previously associated with non-state actors. The distinction between war and peace, combatant and noncombatant, and even violence and nonviolence (think cyberwarfare) is becoming uncomfortably blurry.
As this process takes place and new technologies such as autonomous or biological weapons become easier to use, individuals and small groups will increasingly join states in being capable of causing mass harm. This new vulnerability will lead to new fears. But at the same time, advances in technology will create the potential to reduce the scale or impact of violence, through the development of new modes of protection, for example, or greater precision in targeting.
The impact on people
The Fourth Industrial Revolution, finally, will change not only what we do but also who we are. It will affect our identity and all the issues associated with it: our sense of privacy, our notions of ownership, our consumption patterns, the time we devote to work and leisure, and how we develop our careers, cultivate our skills, meet people, and nurture relationships. It is already changing our health and leading to a “quantified” self, and sooner than we think it may lead to human augmentation. The list is endless because it is bound only by our imagination.
I am a great enthusiast and early adopter of technology, but sometimes I wonder whether the inexorable integration of technology in our lives could diminish some of our quintessential human capacities, such as compassion and cooperation. Our relationship with our smartphones is a case in point. Constant connection may deprive us of one of life’s most important assets: the time to pause, reflect, and engage in meaningful conversation.
One of the greatest individual challenges posed by new information technologies is privacy. We instinctively understand why it is so essential, yet the tracking and sharing of information about us is a crucial part of the new connectivity. Debates about fundamental issues such as the impact on our inner lives of the loss of control over our data will only intensify in the years ahead. Similarly, the revolutions occurring in biotechnology and AI, which are redefining what it means to be human by pushing back the current thresholds of life span, health, cognition, and capabilities, will compel us to redefine our moral and ethical boundaries.
Shaping the future
Neither technology nor the disruption that comes with it is an exogenous force over which humans have no control. All of us are responsible for guiding its evolution, in the decisions we make on a daily basis as citizens, consumers, and investors. We should thus grasp the opportunity and power we have to shape the Fourth Industrial Revolution and direct it toward a future that reflects our common objectives and values.
To do this, however, we must develop a comprehensive and globally shared view of how technology is affecting our lives and reshaping our economic, social, cultural, and human environments. There has never been a time of greater promise, or one of greater potential peril. Today’s decision-makers, however, are too often trapped in traditional, linear thinking, or too absorbed by the multiple crises demanding their attention, to think strategically about the forces of disruption and innovation shaping our future.
In the end, it all comes down to people and values. We need to shape a future that works for all of us by putting people first and empowering them. In its most pessimistic, dehumanized form, the Fourth Industrial Revolution may indeed have the potential to “robotize” humanity and thus to deprive us of our heart and soul. But as a complement to the best parts of human nature—creativity, empathy, stewardship—it can also lift humanity into a new collective and moral consciousness based on a shared sense of destiny. It is incumbent on us all to make sure the latter prevails.
Jun 5, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.
By Mark Esposito
Curated by Helena M. Herrero Lamuedra
Institutions in both the private and public sectors can reap the public relations benefits of doing good while still accomplishing their goals. As resources become scarcer, resource conservation is a major, and still underutilized, way to enhance social performance.
Although the traditional linear economy has long worked, and will never be fully replaced, it is essentially wasteful. The circular economy (CE), by contrast, in which resources and capital goods reenter the system for reuse instead of being discarded, saves on production costs, promotes recycling, decreases waste, and enhances social performance. When CE models are combined with the Internet of Things (IoT), internet-connected devices that gather and relay data to central computers, efficiency skyrockets. Because finite resources are being depleted, the future economy is destined to become more circular. The economic shift toward CE will undoubtedly be hastened by the already ubiquitous presence of IoT, its profitability, and the positive public response it yields.
Unlike the linear economy’s “take, make, dispose” model, the circular economy is an industrial economy that increases resource productivity with the intention of reducing waste and pollution. The main value drivers of CE are (1) extending the use-cycle length of an asset, (2) increasing the utilization of an asset, (3) looping or cascading assets through additional use cycles, and (4) regenerating nutrients to the biosphere.
The Internet of Things is the inter-networking of physical devices through electronics and sensors that collect and exchange data. The main value drivers of IoT are the ability to determine the (1) location, (2) condition, and (3) availability of the assets it monitors. By 2020, at least 20 billion IoT-connected devices are expected worldwide.
The nexus between CE’s and IoT’s value drivers greatly enhances CE. If an institution’s goals are profitability and conservation, IoT enables both through big data and analytics. By automatically and remotely monitoring the efficiency of a resource during harvesting, during production, and at the end of its use cycle, every part of the value chain can become more efficient.
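The three IoT value drivers named above can be pictured as fields on a monitored-asset record that any stage of the value chain can query. Here is a minimal sketch in Python; the asset names, coordinates, and condition scale are hypothetical, not from any real deployment.

```python
# Illustrative sketch: each monitored asset carries the three IoT value
# drivers (location, condition, availability), so the value chain can
# decide which assets are fit to loop into another use cycle.
from dataclasses import dataclass

@dataclass
class MonitoredAsset:
    asset_id: str
    location: tuple      # (latitude, longitude) reported by a GPS sensor
    condition: float     # 0.0 (worn out) .. 1.0 (like new), from wear sensors
    available: bool      # whether the asset is free for its next use cycle

def reusable(assets, min_condition=0.5):
    """Asset ids in good enough condition, and free, to reuse."""
    return [a.asset_id for a in assets
            if a.available and a.condition >= min_condition]

# Hypothetical fleet of three industrial pumps.
assets = [MonitoredAsset("pump-1", (41.4, 2.2), 0.9, True),
          MonitoredAsset("pump-2", (41.4, 2.1), 0.3, True),   # too worn
          MonitoredAsset("pump-3", (41.5, 2.2), 0.7, False)]  # still in use
print(reusable(assets))  # -> ['pump-1']
```

A real system would populate these records from live sensor feeds; the point is only that location, condition, and availability together answer the reuse question directly.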
When examining the value chain as a whole, the greatest use for IoT is at its end. One way this is accomplished is through reverse logistics. Once the time comes for a user to discard an asset, IoT can aid in retrieving it so that it can be recycled into its components. With efficient reverse logistics, goods gain a second life, fewer biological nutrients are extracted from the environment, and the looping and cascading of assets is enabled.
One way to change the traditional value chain is the IoT-enabled leasing model. Instead of selling an expensive appliance or vehicle, manufacturers can produce it with the intention of leasing it to customers. By embedding these assets with IoT sensors, manufacturers can monitor each asset’s condition and repair it at precisely the right time. In theory, the quality of the asset will improve, since it is in the producer’s best interest to make it durable rather than disposable and replaceable.
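The leasing model described above can be sketched in a few lines: a leased asset streams wear readings, and the manufacturer schedules a repair as soon as a reading crosses a threshold, rather than waiting for failure. Everything here is illustrative, assuming a hypothetical normalized wear reading; it is not any vendor’s actual API.

```python
# Illustrative sketch of condition-based servicing for leased assets.
# Wear readings are hypothetical, normalized 0.0 (new) .. 1.0 (worn out).

WEAR_THRESHOLD = 0.8  # assumed service trigger, chosen for illustration

def needs_service(readings):
    """True if any recent wear reading crosses the service threshold."""
    return any(r >= WEAR_THRESHOLD for r in readings)

def schedule_repairs(fleet):
    """Given {asset_id: [wear readings]}, return asset ids due for servicing."""
    return [asset_id for asset_id, readings in fleet.items()
            if needs_service(readings)]

# Example: two leased washing machines report recent wear readings.
fleet = {"washer-17": [0.35, 0.41, 0.83],   # just crossed the threshold
         "washer-22": [0.12, 0.15, 0.18]}   # healthy
print(schedule_repairs(fleet))  # -> ['washer-17']
```

The design choice is the one the article argues for: because the manufacturer keeps ownership, the monitoring loop serves durability, not replacement.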
Even today, many sectors are already benefiting from IoT-driven resource conservation. In the energy sector, Barcelona has reduced its power-grid energy consumption by 33%, while GE has begun using “smart” power meters that reduce customers’ power bills by 10–20%. GE has also automated its wind turbines and solar panels, which now adjust automatically to the wind and the angle of the sun.
In the built environment, cities like Hong Kong have implemented IoT monitoring for preventive maintenance of transportation infrastructure, while Rio de Janeiro monitors traffic patterns and crime from a central operations center. Mexico City has installed fans in buildings that suck up local smog. In the waste-management sector, San Francisco and London have installed solar-powered automated waste bins that alert local authorities when they are full, enabling ideal trash-collection routes and reducing operational costs by 70%.
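The smart-bin example above combines two of the IoT value drivers: fill-level alerts decide which bins need collection, and their locations shape the route. A minimal sketch, assuming made-up coordinates, fill ratios, and a simple nearest-neighbor heuristic (real routing systems use far more sophisticated optimizers):

```python
# Illustrative sketch: only bins reporting a high fill level enter the
# collection route, which a nearest-neighbor heuristic orders to cut
# driving distance. All data values are hypothetical.
import math

FULL = 0.9  # assumed fill ratio at which a bin alerts the depot

def plan_route(depot, bins):
    """bins: {bin_id: (x, y, fill)}. Return bin ids to visit, nearest first."""
    to_visit = {bid: (x, y) for bid, (x, y, fill) in bins.items() if fill >= FULL}
    route, here = [], depot
    while to_visit:
        # Greedily pick the closest remaining full bin.
        bid = min(to_visit, key=lambda b: math.dist(here, to_visit[b]))
        route.append(bid)
        here = to_visit.pop(bid)
    return route

bins = {"A": (1, 0, 0.95), "B": (5, 5, 0.40), "C": (2, 1, 0.92)}
print(plan_route((0, 0), bins))  # -> ['A', 'C']  (B is not full yet)
```

Skipping bin B entirely, rather than visiting it on a fixed schedule, is where the reported operational savings come from.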
Despite the many advantages of this innovation, there are numerous current limitations. Because it is difficult to legislate for new technologies, governmental regulation lags behind innovation. For example, because Brazil, China, and Russia have no legal standards distinguishing remanufactured products from used ones, cross-border reverse supply chains are blocked. Reverse supply chains are also hurt by a current lack of consumer demand, caused by the low residual value of returned products. And IoT technology itself, which collects so much data about people’s private lives, generates major privacy concerns.
Questions arise: Who owns the data collected? How reliable are IoT-dependent systems? How vulnerable are these assets to hackers? And despite the prevalence of IoT today, with 73% of companies investing in big data analytics, most of that data is used merely to detect and control anomalies, and IoT remains vastly underutilized. Take an oil rig: it may have 30,000 sensors, but only 1% of them are examined. Underutilization of IoT cost businesses an estimated $544 billion in 2013 alone.
Even with these barriers, the potential profits and increased social performance make the future of an IoT-enhanced CE bright.
Estimates suggest that institutions adopting CE models could decrease both costs and waste by 20%. The increase in efficiency, combined with the goodwill generated by conservation, is a win-win proposition for innovation; even with implementation costs, future profitability will make it a no-brainer.
May 27, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.
By Christine Carter, PhD
Curated by Helena M. Herrero Lamuedra
We humans have become multi-tasking productivity machines. We can work from anywhere, to great effect. We can do more, and do it far more quickly, than we ever dreamed possible. Our fabulous new technologies buy us tons more time to crank out our work, get through our emails, and keep up with Modern Family. Time my great-grandmother spent making food from scratch, or hand-washing the laundry, we can now spend, say, driving our kids to their away games.
So now that we have so much more time to work and do things previous generations never dreamed possible (or even deemed desirable), why do we always feel starved for time?
The obvious answer is that we have so much more work, and expectations about what we will accomplish on a good day have expanded, but the number of hours in that day has stayed the same.
That’s true, but I also think there is something else at work here: We have gotten really, really bad at just doing nothing.
Stillness—or the ability to just sit there and do nothing—is a skill
Look around: We can’t even stand to wait in an elevator for 10 seconds without checking our smartphones. A recent series of studies describes what happened when research subjects were left alone in a room with nothing to do. The researchers describe their work:
In 11 studies, we found that participants typically did not enjoy spending 6 to 15 minutes in a room by themselves with nothing to do but think, that they enjoyed doing mundane external activities much more, and that many preferred to administer electric shocks to themselves instead of being left alone with their thoughts. Most people seem to prefer to be doing something rather than nothing, even if that something is negative.
You read that right: Many people (67 percent of men and 25 percent of women, to be exact) actually gave themselves painful electric shocks instead of just sitting there doing nothing—after they had indicated to the researchers that they would pay money NOT to be shocked again. One guy shocked himself 190 times in 15 minutes.
This brings me back to my main point: Stillness, or the ability to just sit there and do nothing, is a skill, and as a culture we’re not practicing this skill much these days. When we can’t tolerate stillness, we feel uncomfortable when we have downtime, and so we cancel it out by seeking external stimulation, which is usually readily available in our purse or pocket. Instead of just staring out the window on the bus, for example, we read through our Facebook feed. We check our email waiting in line at the grocery store. Instead of enjoying our dinner, we mindlessly shovel food in our mouths while staring at a screen.
Here’s the core problem with all of this: We human beings need stillness in order to recharge our batteries. The constant stream of external stimulation that we get from our televisions and computers and smart phones, while often gratifying in the moment, ultimately causes what neuroscientists call “cognitive overload.” This state of feeling overwhelmed impairs our ability to think creatively, to plan, organize, innovate, solve problems, make decisions, resist temptations, learn new things easily, speak fluently, remember important social information, and control our emotions. In other words, it impairs basically everything we need to do in a given day.
If we want to be high-functioning and happy, we need to re-learn how to be still.
But wait, there’s more: We only experience big joy and real gratitude and the dozens of other positive emotions that make our lives worth living by actually being in touch with our emotions—by giving ourselves space to actually feel what it is we are, well, feeling. In an effort to avoid the uncomfortable feelings that stillness can produce (such as the panicky feeling that we aren’t getting anything done), we also numb ourselves to the good feelings in our lives. And research by Matt Killingsworth suggests that actually being present to what we’re feeling and experiencing in the moment—good or bad—is better for our happiness in the end.
Here’s the main take-away: If we want to be high-functioning and happy, we need to re-learn how to be still. When we feel like there isn’t enough time in the day for us to get everything done, when we wish for more time… we don’t actually need more time. We need more stillness. Stillness to recharge. Stillness so that we can feel whatever it is that we feel. Stillness so that we can actually enjoy this life that we are living.
If you are feeling overwhelmed and time-starved, remember that what you need more than time (to work, to check tasks off your list) is downtime, sans stimulation.
As a society, we don’t just need to learn to tolerate stillness, we actually need to cultivate it. Fortunately, it’s not complicated.
How to Cultivate Stillness:
1) Try driving in silence, with your radio and phone off. (Encourage your children to look out the window while you drive them, instead of down at their devices.)
2) Eat meals out of the sight and sound of your phones and televisions.
3) Take a walk outside every day, preferably in nature, without a phone or music player. If it’s hard, just try a few minutes at a time, adding a few minutes each day.
4) Just practice; it’ll get easier, and the benefits will become more apparent.
5) Finally, forgive yourself the next time you find yourself staring blankly into space.
May 22, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.
By Lewis Robinson and Dr. Travis Bradberry
Curated by Helena M. Herrero Lamuedra
On the surface, multitasking seems like a winning proposition. After all, if you work on two projects at once, then you’ll finish twice as quickly, right? It’s a perfect situation! How can you lose?
Unfortunately, it doesn’t work that way. When you try to focus on two different projects, you divide your attention, and your brain has to expend additional energy each time you switch from one task to the other. Often, it will actually end up taking you longer to get the work done.
Negative Effects of Multitasking
Multitasking is not only an inefficient use of your time; it can actually have a negative impact, both on your work and your personal well-being. Let’s take a look at a few of the problems it can bring about:
If you’re splitting your attention among two, three, or even more tasks at once, you’re not able to focus fully on any one of them. The brain is an incredible tool, but it can only go so far before it starts to experience diminishing returns. Guy Winch, PhD, author of Emotional First Aid: Practical Strategies for Treating Failure, Rejection, Guilt, and Other Everyday Psychological Injuries, posits that we’re not really “multitasking” at all. Instead, we’re “task-switching.”
“When it comes to attention and productivity, our brains have a finite amount,” says Winch. “It’s like a pie chart, and whatever we’re working on is going to take up the majority of that pie. There’s not a lot left over for other things, with the exception of automatic behaviors like walking or chewing gum.” You’re never really able to focus on one task enough to get “in the zone.”
This may seem counterintuitive, but the more tasks you work on at a time, the less work you get done. Most people resort to multitasking as a way to get more done, not less. But working distracted leads to slower performance and more mistakes. In fact, shifting back and forth between two or more tasks creates mental blocks as your brain shifts its focus. These blocks can cost as much as 40% of your regular productive time.
Multitasking increases stress, which isn’t always bad in the short-term, but can lead to serious complications if it goes on for too long. Chronic stress causes your body to produce more cortisol, which can bring on physical complications, such as heart issues, high blood pressure, and a diminished immune system.
What to Do About It
Even when you recognize the negative effects multitasking can have on your work, it’s still tempting to try to work on several jobs at once.
Delegate as Needed
Instead of splitting one mind among several tasks, try the opposite tactic. Spread the load out a bit and assign certain jobs to other team members who may be able to lend a hand. Put a work structure in place with the goal of keeping any particular employee’s queue from filling up too much.
Manage Your (and Your Team’s) Workflow
This is essentially just another way of saying, “Plan ahead.” Keep an eye on what projects you and your team have coming down the pipeline. If you know there will be a huge project that you will need to focus all of your attention on in the next month, do what you can to clear other tasks from that time. Prepare yourself and your team members for any eventuality.
This also means setting priorities. If everything you send to your team is marked “ASAP,” then they have no way to know which tasks to tackle first. This usually leads to employees bouncing back and forth between each task, trying to get them all done quickly. Eventually, instead of everything getting done immediately, nothing ends up getting done.
Take Regular Breaks
Oddly enough, taking breaks can actually lead to more getting done. If you are constantly working, with no end in sight, it's easy to get burned out. Shorter bursts of work are more productive, so you and your team should take a break every 50 to 90 minutes. This will give you the occasional moment to unwind from the constant focus, leading to better results over the long term.
The next time you tell yourself that you’ll sleep when you’re dead, realize that you’re making a decision that can make that day come much sooner. Pushing late into the night is a health and productivity killer.
According to the Division of Sleep Medicine at the Harvard Medical School, the short-term productivity gains from skipping sleep to work are quickly washed away by the detrimental effects of sleep deprivation on your mood, ability to focus, and access to higher-level brain functions for days to come. The negative effects of sleep deprivation are so great that people who are drunk outperform those lacking sleep.
Why You Need Adequate Sleep to Perform
We’ve always known that sleep is good for your brain, but new research from the University of Rochester provides the first direct evidence for why your brain cells need you to sleep (and sleep the right way—more on that later). The study found that when you sleep your brain removes toxic proteins from its neurons that are by-products of neural activity when you’re awake. Unfortunately, your brain can remove them adequately only while you’re asleep. So when you don’t get enough sleep, the toxic proteins remain in your brain cells, wreaking havoc by impairing your ability to think—something no amount of caffeine can fix.
Skipping sleep impairs your brain function across the board. It slows your ability to process information and problem solve, kills your creativity, and catapults your stress levels and emotional reactivity.
What Sleep Deprivation Does to Your Health
Sleep deprivation is linked to a variety of serious health problems, including heart attack, stroke, type 2 diabetes, and obesity. It stresses you out because your body overproduces the stress hormone cortisol when it’s sleep deprived. While excess cortisol has a host of negative health effects that come from the havoc it wreaks on your immune system, it also makes you look older, because cortisol breaks down skin collagen, the protein that keeps skin smooth and elastic. In men specifically, not sleeping enough reduces testosterone levels and lowers sperm count.
Too many studies to list have shown that people who get enough sleep live longer, healthier lives, but I understand that sometimes this isn’t motivation enough. So consider this—not sleeping enough makes you fat. Sleep deprivation compromises your body’s ability to metabolize carbohydrates and control food intake. When you sleep less you eat more and have more difficulty burning the calories you consume. Sleep deprivation makes you hungrier by increasing the appetite-stimulating hormone ghrelin and makes it harder for you to get full by reducing levels of the satiety-inducing hormone leptin. People who sleep less than 6 hours a night are 30% more likely to become obese than those who sleep 7 to 9 hours a night.
How Much Sleep Is Enough?
Most people need 7 to 9 hours of sleep a night to feel sufficiently rested. Few people are at their best with less than 7 hours, and few require more than 9 without an underlying health condition. And that’s a major problem, since more than half of Americans get less than the necessary 7 hours of sleep each night, according to the National Sleep Foundation.
A recent survey of Inc. 500 CEOs found that half of them are sleeping less than 6 hours a night. And the problem doesn’t stop at the top. According to the Centers for Disease Control and Prevention, a third of U.S. workers get less than 6 hours of sleep each night, and sleep deprivation costs U.S. businesses more than $63 billion annually in lost productivity.
Doing Something about It
Beyond the obvious sleep benefits of thinking clearly and staying healthy, the ability to manage your emotions and remain calm under pressure has a direct link to your performance.
When life gets in the way of getting the amount of sleep you need, it's absolutely essential that you increase the quality of your sleep through good sleep hygiene. There are many hidden killers of quality sleep, and the strategies below will help you identify them and clean up your sleep habits.
Moderate Caffeine (at Least after Lunch)
You can sleep more and vastly improve the quality of the sleep you get by reducing your caffeine intake. Caffeine is a powerful stimulant that interferes with sleep by increasing adrenaline production and blocking sleep-inducing chemicals in the brain.
When you do finally fall asleep, the worst is yet to come. Caffeine disrupts the quality of your sleep by reducing rapid eye movement (REM) sleep, the deep sleep when your body recuperates most. When caffeine disrupts your sleep, you wake up the next day with a cognitive and emotional handicap. You’ll be naturally inclined to grab a cup of coffee or an energy drink to try to make yourself feel more alert, which very quickly creates a vicious cycle.
Avoid Blue Light at Night
Short-wavelength blue light plays an important role in your mood, energy level, and sleep quality. In the morning, sunlight contains high concentrations of this “blue” light. When your eyes are exposed to it directly (not through a window or while wearing sunglasses), the blue light halts production of the sleep-inducing hormone melatonin and makes you feel more alert. This is great, and exposure to a.m. sunlight can improve your mood and energy levels.
In the afternoon, the sun’s rays lose their blue light, which allows your body to produce melatonin and start making you sleepy. By the evening, your brain does not expect any blue light exposure and is very sensitive to it. The problem this creates for sleep is that most of our favorite evening devices—laptops, tablets, televisions, and mobile phones—emit short-wavelength blue light. This exposure impairs melatonin production and interferes with your ability to fall asleep as well as with the quality of your sleep once you do nod off. When you confuse your brain by exposing it in the evening to what it thinks is a.m. sunlight, this derails the entire process with effects that linger long after you power down.
When you work in the evening, it puts you into a stimulated, alert state when you should be winding down and relaxing in preparation for sleep. Recent surveys show that roughly 60% of people monitor their smartphones for work emails until they go to sleep. Staying off blue light-emitting devices (discussed above) after a certain time each evening is also a great way to avoid working so you can relax and prepare for sleep, but any type of work before bed should be avoided if you want quality sleep.
Try Meditation
Many people who learn to meditate report that it improves the quality of their sleep and that they can get the rest they need even if they aren't able to significantly increase the number of hours they sleep. At the Stanford Medical Center, insomniacs participated in a 6-week mindfulness meditation and cognitive-behavioral therapy course. At the end of the study, participants' average time to fall asleep was cut in half (from 40 to 20 minutes), and 60% of subjects no longer qualified as insomniacs. The subjects retained these gains upon follow-up a full year later. A similar study at the University of Massachusetts Medical School found that 91% of participants either reduced the amount of medication they needed to sleep or stopped taking medication entirely after a mindfulness and sleep therapy course. Give mindfulness a try. At minimum, you'll fall asleep faster, as it will teach you how to relax and quiet your mind once you hit the pillow.
Bringing It All Together
We all know someone who is always up at all hours of the night working or socializing, yet is the number one performer at the office. Watch out: that person is underperforming, even if it isn't obvious yet. After all, the only thing worth catching up on at night is your sleep.