There is a palpable, ominous feeling in the air that something huge and unstoppable is about to happen to our world — a tectonic shift in our understanding of who we are, and how we define our reality, that we excitedly anticipate, but are deathly afraid of at the same time.
Whatever this “next level” in humanity’s evolution is, this much is clear: it will be powered by Artificial Intelligence in unimaginable ways that will potentially propel us to utopian heights, or destroy us in a great ball of fire, depending on who you ask [insert Elon Musk “AI robots will kill us” quote here!].
In short, AI has arrived. Either get on the train, or get run over by it. Most people, however, don’t know what to do — they’re stuck standing on the tracks, like deer in headlights, with the train barreling towards them at full speed.
The problem is, the average person has little understanding of what AI really is or what its impact on society (and their personal lives) will be in the immediate and long-term future. And with even the most revered AI thought-leaders warning that we need to protect ourselves from creating AI that could eventually wipe us out, it’s no wonder people are confused and concerned about AI’s potential and our future. It doesn’t help that, for the past 50 years, we’ve been fed a steady diet of sci-fi movies and TV shows depicting robots enslaving and/or destroying humanity.
Should we welcome AI? Fear it? Make an effort to understand it? Sit back, toss a cosmic coin, and hope that our future luckily falls on the “humanity wins” side? Or just pile into a spaceship and slingshot ourselves to Mars?
In an interview on the Singularity 1 on 1 podcast, Gerd Leonhard (Founder of The Futures Agency and author of Technology vs. Humanity) explained how his clients at Fortune 500 companies (predominantly CEOs) have been expressing mixed emotions about the rise of AI:
“In the last couple of years, the biggest conversations I’ve had with people is that they’re really worried about technology basically changing every component of our lives — many of them are good (for example, disruption of businesses and new ideas) — but also changing who we are as people, for example, by augmenting ourselves, forcing ourselves to use technology to get better faster, and eventually merging with machines. This thought of merging with machines is something that’s not as far-fetched as it may seem. But it’s also a very accepted idea in the US and Europe. It became a very big topic with my clients to ask, “Isn’t this all going to end with the robots killing us?” — Gerd Leonhard
But this doesn’t tell the whole story of our attitudes towards Artificial Intelligence. A new research study by Weber Shandwick, published this week by Harvard Business Review, What Do People — Not Techies, Not Companies — Think About Artificial Intelligence?, tells a far more nuanced and conflicted story.
Despite the fact that the research (which surveyed 2,100 consumers across five global markets: the U.S., Canada, the UK, China, and Brazil, as well as 150 CMOs in the U.S., UK, and China) indicates that more people view AI positively than negatively (45% to 7%), consider the following four additional findings:
People vary in how they understand AI. Two-thirds of those surveyed say they know something about AI, although only about two in 10 (18%) say that they know a lot. One-third acknowledged knowing nothing about AI. We found that by far the most common first impression of AI is “robots,” cited by 22% of respondents.
Trust in AI depends on experience and expertise. When it comes to accurate sources of information about AI, consumers reported that the most-credible information will come from hands-on experience (46%) and technology experts (46%). Consumers also reported that when dealing with AI, they would rely on academics and experts in specialized fields (39%) and professional product reviews (38%). They gave less weight to the advice of friends, family, and other personal relationships (28%). Despite the seeming worldwide backlash against elites and their expertise, consumers are still willing — at least with respect to AI — to turn to those who are most informed.
Consumers encounter AI on a frequent basis. When asked where their overall impression of AI comes from, 80% of global consumers mention some form of media — internet, social media, TV, movies, and the news. Nearly six in 10 (59%) said they had seen or read something about AI or had some personal experience with it in the 30 days prior to taking our survey. Notably, 82% of these consumers reported that their recent interaction had left them with a positive impression.
Without a doubt, the probability of job loss due to AI was the largest concern among respondents. When asked whether AI is more likely to create jobs or lead to job loss, more consumers said job loss (82%) than job creation (18%).
Our understanding of AI is all over the map and lacking (which is to be expected since we’re in the early stages of AI’s development), but we’re exposed to enough information sources to learn about AI advancements on a regular basis (though we feel those sources are inadequate, possibly overwhelming, and would prefer experts to guide us). We expect AI to become more prevalent in our lives, and we welcome its eventual all-consuming invasiveness, but we also have real concerns that have yet to be addressed by the people and companies creating the technology or the government.
The most compelling result of the Weber Shandwick study is that 82% of respondents believe AI is more likely to lead to job loss than job creation. That’s an astonishingly high number and, although it is likely colored by people’s lingering anxieties over the slow job recovery after the 2008 crash, it points to a massive problem on the horizon (and the need for more research on people’s attitudes towards AI).
Unless the general population is soon given a plan of action for protecting their livelihoods, and guidance on how to prepare for a future where “robots” begin to replace people at all levels of the workplace, we can expect these reasonable concerns to gradually turn into anxiety and, eventually, full-on panic as we are bombarded ever more frequently with news about robots replacing human beings around the globe (“A big Dutch bank is replacing 5,800 people with machines, at a cost of $2 billion”). The worst part about the coming job losses isn’t just the potential economic crisis; it’s the existential crisis to our ego (the real elephant in the room that the Weber Shandwick research doesn’t address).
Of course, Artificial Intelligence will also offer us plenty of mind-blowing opportunities to create the next generation of innovative businesses that will bring with them more jobs. But the mixed signals we’re getting from the top, the lack of information and public discourse on AI, and the absence of a cohesive vision of the future for us to collectively work towards only feed into our insecurities and fears, not our hopes and dreams.
Last month, I participated in a survey in which influencers were asked about the impact of emerging technologies on our society and how we should prepare for it, Ask the Robot Question, Expert Activists & Author Comments (produced by Future Left). In my response, I noted that our greatest challenge moving forward will be educating and inspiring the general population on the coming disruptive technologies.
The Obama administration has only just begun to address this matter with the release of a report earlier this month, Preparing for the Future of Artificial Intelligence, outlining how we as a nation should begin to prepare for our AI future. The report focused on the following topics:
- Applications of AI for Public Good
- AI and Regulation
- Research and Workforce
- Economic Impacts of AI
- Fairness, Safety, and Governance
- Global Considerations and Security
- Preparing for the Future
After reviewing the report, I was very surprised to see that there was no mention of public outreach programs or informal education. While the report does outline how universities, community colleges, and other formal education institutions will need to educate students for an AI future, it fails to offer a vision for how millions of Americans who won’t necessarily go back to college will be educated about the next technological revolution, and how they will prepare for alternative jobs or long-term unemployment (or, if people do opt for re-education at formal institutions like colleges, how they will pay for it).
The government can’t offer a vision and roadmap for the future alone. Tech companies, employers, investors, and brands need to step up big time to facilitate more education, entertainment, and discourse around the topic of AI. But even this feels inadequate for the challenges we face ahead.
The Promised Land
Earlier this year, the World Economic Forum held its annual meeting of the world’s most influential businesspeople, economists, politicians, and assorted thought-leaders in Switzerland to discuss the issues driving our global economy and innovation. The focus of Davos 2016 was the Fourth Industrial Revolution: What It Means, How to Respond.
In an article for Foreign Affairs, Klaus Schwab (Founder and Executive Chairman of the World Economic Forum) outlined the Davos 2016 agenda, and the challenges and opportunities for our world as we enter the Fourth Industrial Revolution. In summation, he offered the following directive:
In the end, it all comes down to people and values. We need to shape a future that works for all of us by putting people first and empowering them. In its most pessimistic, dehumanized form, the Fourth Industrial Revolution may indeed have the potential to “robotize” humanity and thus to deprive us of our heart and soul. But as a complement to the best parts of human nature — creativity, empathy, stewardship — it can also lift humanity into a new collective and moral consciousness based on a shared sense of destiny. It is incumbent on us all to make sure the latter prevails. — Klaus Schwab
This is vaguely reminiscent of comments made this past summer by Eric Schmidt (Executive Chairman of Alphabet, Google’s parent company) and Sebastian Thrun (President and Chairman of Udacity) in a commentary piece they co-authored in Fortune, Let’s Stop Freaking Out About Artificial Intelligence. In it, they waxed poetic about the wondrous, orgasmic potential of AI, while brushing aside any possible downsides of the technology:
“We believe AI has the potential not only to free us from the negative, but to enhance what’s most positive about us as human beings… We could all be like Sedol, harnessing AI to improve the things we do every day. Imagine a world where clever apps and devices could help us recognize every person we’ve ever met, recall anything we’ve ever said, and experience any moment we’ve ever missed… Sophisticated AI-powered tools will empower us to better learn from the experiences of others, and to pass more of our learnings on to our children… For us, ultimately the hypothetical, long-term concerns are far outweighed by our excitement for the endless possibilities… We can’t wait to see AI free us of mindless, menial work and empower us to unfold our true creative powers.” — Schmidt & Thrun
There is a big gulf between the way thought-leaders and technology innovators view Artificial Intelligence and the way the participants in the Weber Shandwick study (ordinary, non-technical people) view it. It is imperative that we begin to close this gap.
So far, it feels as if we are being led to the rushing waters of the river and being directed to cross over to the promised land on the other side, without a plan, or a Moses.
Help Wanted: Artificial Intelligence Prophets
Prophets rise during times of uncertainty and turbulence to offer the masses a reliable, relatable, and steady vision of the future that will carry them from their present, painful circumstances, to an elevated, utopian future. The Prophet’s gift is communicating a vision for the future that resonates with people’s deepest desires and calms their fears, while preparing them for the sacrifices and obstacles to come as they work towards a collective goal that will benefit everyone.
By nature of being bold, courageous, unapologetic messengers of disruption, Prophets instill in the general population FOMO (fear of missing out) on the collective dream. They present a challenge to people to rise above their lower natures and to resist the urge to give up.
I recently watched videos from Davos 2016, where some of the greatest minds in technology, economics, and business discussed how we would move forward boldly towards a new era in human achievement. I heard some interesting, thought-provoking things here and there. But ultimately, something was missing, something I think we desperately need in order to place all of this dreamy, fantastical talk about AI into substantive context and give us something tangible to grab onto: clarity, cohesiveness, inspiration, and a plan. Of course, this event isn’t targeting the average consumer, but that’s also part of the problem.
I couldn’t help but think: If Artificial Intelligence is our new religion, where are the AI Prophets? Where are the voices of innovation and change that can crystallize our AI future and nudge us towards it?
As it turns out, being a prophet ain’t easy. Even when you’re trying to help people, they can be thankless, petty, and work against their own cause and growth. Maybe that’s why religious prophets have gone out of fashion. The world used to be full of them but, with digital distractions replacing soulful introspection, prophets have been pushed aside. Who needs a prophet predicting the future when you have big data and predictive analytics? Do we really need inspiration from some dude in a robe, when we have Netflix and one-touch pizza delivery on our mobile phones? Probably not.
But the coming transition as a result of Artificial Intelligence technologies is, I believe, different than anything we’ve experienced before. It will force us to redefine who we are, what really matters, our morals, and whether we need the very things we’ve been told, for thousands of years, are necessary to a happy, fulfilling life — like steady work. We will need to figure out what to do, how to keep the population fed and content, how to reorder society, change old laws and create new ones to support innovation, and how to, once again (sigh), keep us from destroying ourselves with the technology we’ve created.
Maybe it’s time to bring prophets back.
There is a desperate need for vision, guidance, and education in Artificial Intelligence — the ideal prophet(s) would be someone who can shoot from the hip unapologetically, be empathetic to people’s immediate inclination to recoil at uncomfortable change due to innovation, be ideologically flexible, and be a steady source of knowledge and support — without interrupting our media consumption (yes, we need to evolve, but when Game of Thrones is on, the Promised Land will have to take a backseat).
There’s still time for voices to emerge that can offer much-needed structure to the chaotic and confusing AI landscape — but the future won’t wait for us. We’ll either be prepared for it, or be run over by it, and quite a few people will be left behind. Calling all prophets.
Davos 2016: Final Thoughts from Artificial Intelligence Thought-Leaders
One of the Davos 2016 videos, “The State of Artificial Intelligence,” featured four of the most respected minds in AI (in academia and business), moderated by Connyoung Jennifer Moon, Chief Anchor and Editor-in-Chief of Arirang TV & Radio, Republic of Korea. The panel speakers were Andrew Moore (Carnegie Mellon), Ya-Qin Zhang (President of Baidu), Matthew Grob (CTO, Qualcomm), and Stuart Russell (Artificial Intelligence pioneer, University of California, Berkeley).
At the end of the panel discussion, Ms. Moon asked the panelists for final thoughts they would like to share with the audience about the future of Artificial Intelligence. I’ve posted each of their final thoughts below.
Do their visions and messages about the future of Artificial Intelligence resonate with you? Let me know in the comments.
“We’re in a really exciting time and we now have hundreds of thousands of young computer scientists around the world. The thing that I like about them is that they are working towards using this advanced technology for helping us with many of the problems we’ve got: social problems, political problems, medical problems. In my opinion, it’s one of the bright benefits that we have at the moment — that artificial intelligence is being used for good across the planet and I very much encourage youngsters to get into this area. It’s the one thing that’s closest to working magic at the moment.” — Andrew Moore (Dean, School of Computer Science, Carnegie Mellon University, USA)
“[Baidu is the] largest search engine in China and investing a lot in AI. If you look at all the technologies for the next decade, AI is certainly the foundation and the engine to drive a lot of things. So if you have a startup and invest in business, consider AI; it’s a necessity for everything else.” — Ya-Qin Zhang (President, Baidu.com, People’s Republic of China)
“I couldn’t agree more. I think the video got it right that ‘this is going to change our lives.’ It really is, and for the most part it’s going to be for the good. I’m not concerned about downloading my consciousness today. That might not happen for a hundred years — I’ll never say never. But advancements that are pragmatic, useful, improve the performance of our products, improve medical devices and diagnostics, those are all upon us; some of them are happening already. It’s just a very overall positive movement and we’re very happy to be a part of it.” — Matthew Grob (Executive Vice-President and Chief Technology Officer, Qualcomm, USA)
“So the way I think about it is, everything good that we have in our lives, that civilization consists of, is from our intelligence; it’s not the result of our long teeth or big scary claws. So if AI, as it seems to be happening, can amplify our intelligence, can provide tools that make us, in effect, much more intelligent than we have been, then we could be talking about a golden age for humanity, possibly the elimination of disease and poverty, solving the climate change problem, all being facilitated by the use of this technology. So I’m extremely optimistic that the upside is very great. And that’s the reason why we need to make sure the downside doesn’t occur.” — Stuart Russell (Professor of Computer Science, University of California, Berkeley, USA)