Tech 2025 Launches Mission AI to Bridge the Gap in AI Research Between Researchers and the General Public

“Mission AI is about giving people access to the research that is defining our future so that they can participate in the problem-solving with the research community. Mission AI is AI research for the people, by the people.” — Charlie Oliver (CEO, Tech 2025)


New York, NY, June 21, PRESS RELEASE — Building on its mission to provide a platform for people to learn about and discuss the implications of emerging technologies, and to have a voice in developing the technologies of the future, Tech 2025 has announced the launch of “Mission AI Research” – an initiative to introduce its community and the general public to practical, interdisciplinary AI and machine learning research with an emphasis on social science.

Through a combination of live think tank events, online workshops and live chats, and community-funded research, Mission AI will provide a platform for people (regardless of their background or level of technical expertise) to participate in AI research and to engage in discussions and exercises on practical, applied AI research, helping them think critically about some of the big problems we face in implementing AI – particularly problems related to AI risk, ethics, and algorithmic fairness. Additionally, Mission AI will give postgraduate research assistants a platform to engage with the general public on their research and ideas (an avenue that AI research currently lacks).

The new initiative comes at a crucial time: governments and big tech companies around the world are racing to develop and harness the most powerful AI technologies while, at the same time, we are beginning to see the potentially negative impact of unleashing algorithms that are biased or that make potentially devastating decisions that cannot be fully explained (so-called black boxes). Meanwhile, tech companies are struggling to develop safeguards to keep AI from being used maliciously and destructively by bad actors, and the general public remains largely unaware of these threats.

Hype cycles aside, the pace of AI research and development is, without a doubt, accelerating. According to research by IDC, worldwide spending on artificial intelligence and cognitive systems will increase to $46 billion in 2020, up 768% from 2016. McKinsey reports that big tech companies including Google, Microsoft, Facebook, IBM, Apple, Amazon, and Baidu spent between $20 billion and $30 billion on AI in 2016, with 90% of that going to R&D and deployment (the other 10% to acquisitions). Also driving this flurry of AI investment are VCs who, worldwide, have poured more than $10.8 billion into AI and machine learning companies, creating a need for companies to hire employees (technical and non-technical) who can help them understand and apply these technologies.

But while big tech companies and educational institutions are pouring gobs of money into AI research, and have launched organizations to address the most pressing concerns around AI ethics and safety (OpenAI, AI Now, and DeepMind, to name a few), these initiatives have not yet included public education and outreach as a substantive part of their mission.

This glaring omission in the AI research ecosystem is precisely the gap that Mission AI seeks to fill. Charlie Oliver, Founder and CEO of Tech 2025, believes this is an imperative that can no longer be ignored or delayed:

“You cannot change the world for the better with AI without educating and engaging the general public in all aspects of its development and implementation. If companies and governments don’t begin to invest heavily in the problem-solving power of ordinary people to help figure this stuff out, we may all pay a heavy price in the long run. What we seek to do with Mission AI is to use compelling research to empower and inspire people to participate substantively in the discussion, and to generate ideas, solutions and new models. People have been telling us they want to do more. Mission AI Research will bridge the gap between AI researchers, practitioners, corporations implementing these technologies and their workforce, and the general public.” — Charlie Oliver

Since it first launched in January 2017, Tech 2025 has been planting the seeds of this mission by introducing its community to AI research and researchers at numerous think tank events.

Ultimately, the goal of Mission AI is to help people to become more literate, confident and engaged in AI research so that they can become more informed, creative problem-solvers in the workplace (where they will increasingly be expected to work with AI technologies), as consumers in need of privacy/data protections, and in government (where new legislation is desperately needed to carry us into this next era of innovation). Just as important, Mission AI will give researchers a platform for engaging with (and listening to!) the general public’s feedback on AI and their research.

Mission AI will:

  • Inject diversity and the voice of the general public in the ongoing debate about how to make AI safe and equitable;
  • Educate the public on the practical AI research ecosystem and how research is used to solve problems and create solutions in business, education, government, and consumer products;
  • Keep the community up to date on the most consequential research being published and facilitate discussions around it;
  • Bridge the gap between the general public and the AI research community so that there can be a sharing of knowledge and exchange of ideas between the academic/technical practitioners and the non-technical public;
  • Provide a platform for researchers (particularly research assistants) to engage with the general public about their research and ideas in ways they currently cannot in institutions; and
  • Publish AI research funded by the general public to present to the professional AI research community and companies.

The program launches in June with a summer-long series of live think tank events and online events. Advisors include Dr. Andrew Maynard (Director, Risk Innovation Lab at Arizona State University), Deborah Bryant (Senior Director, Open Source and Standards at Red Hat), and Randi Williams (Graduate Research Assistant at MIT Media Lab). Additional advisors and partners will be announced in the coming weeks.

Dr. Maynard sees the need for a platform like Mission AI now more than ever. Notes Maynard:

“AI has tremendous potential to improve our lives. But it’s by no means a given that experts will have the insights needed to develop the technology responsibly, or that people potentially affected by it will have a say in how AI is developed and used. Mission AI provides a critical nexus between AI experts and members of the public that will help inform and guide socially responsible AI research and development, while engaging a wider audience in the tremendous excitement and transformative power associated with this technology.” — Andrew Maynard

For questions, additional information or interviews, contact Rebeca Cornejo (Media Coordinator, Tech 2025) at theteam[at]



Dr. Andrew Maynard

Andrew Maynard is a Professor in the School for the Future of Innovation in Society at Arizona State University, Director of the Risk Innovation Lab – a unique center focused on transforming how we think about and act on risk in the pursuit of increasing and maintaining “value” – and a Senior Sustainability Scholar at the Julie Ann Wrigley Global Institute of Sustainability.

He was previously Chair of the Environmental Health Sciences Department in the University of Michigan School of Public Health. Maynard’s research and professional activities focus on risk innovation, and the responsible development and use of emerging technologies, including nanotechnology and synthetic biology. He is widely published, has testified before congressional committees, has served on National Academy panels and serves on the World Economic Forum Global Future Council on Technology, Values and Policy.

Maynard writes widely on the intersection between emerging technologies and society, including regular articles on the news website The Conversation. Courses taught by Maynard have included risk assessment, risk innovation, science communication, risk and the future, environmental health policy, and entrepreneurial ethics. He also lectures widely on technology innovation and responsible development. Maynard is a well-known science communicator, and works closely with and through conventional and new media to connect with audiences around the world on technology innovation and the science of risk. He is the creator of the YouTube channel Risk Bites, and his Twitter handle is @2020science. Maynard is currently working on the book The Moviegoer’s Guide to the Future.

Deborah Bryant

Deborah Bryant, Senior Director of Open Source and Standards at Red Hat. Deborah is an acknowledged international expert in the adoption and use of open source software and open development models as well as open source community health. Her personal interests include the ethical use of AI and Machine Learning as well as industry accountability for use of personal information.

Prior to her deep involvement in open source, Deborah helped build Oregon emerging-technology start-ups: parallel and high-speed computing and commercialized internet and web applications in the ’80s, and commercial wide area networks, advanced telecommunications, and data/voice convergence in the ’90s.

Deborah’s public sector background includes ten years in state government; as a civil servant in Oregon’s executive branch as Deputy State CIO; in public office as an elected official in coastal Oregon; at Oregon State University building the Open Source Lab.

Deborah serves on numerous boards with a public trust agenda and an emphasis on open source software as enabling technology. She currently serves as Board Adviser for the Open Source Elections Technology (OSET) Foundation and is Board Director Emeritus for Open Source Initiative (OSI), the international standards organization for open source software.

She has authored and contributed to numerous published studies related to open source in the public sector, the adaptation of new collaborative models for economic development, and the use of open source software in the US energy sector for cybersecurity.

Deborah received the prestigious industry Open Source Award in 2010 in recognition of her contributions to open source communities and for her pioneering advocacy of the use of open source software in the public sector.

Randi Williams

Graduate Research Assistant at MIT Media Lab. Randi is currently pursuing a PhD in Media, Arts, and Sciences with a concentration in Robotics and Human-Robot Interaction at MIT. She plans to use her research to develop assistive devices and collect behavioral information for well-being and educative applications. Her recent research, “Hey Google, is it OK if I eat you?: Initial Explorations in Child-Agent Interaction” (June 2017), co-authored with Stefania Druga, explores how children perceive and interact with autonomous technologies that are becoming more embedded in their daily lives.

She has worked at MIT Lincoln Laboratory, MIT’s Media Lab, NASA Jet Propulsion Lab, and Jawbone. At MIT Lincoln Laboratory she worked in the Decision Support and Informatics Group doing video analytics through machine learning. At MIT’s Media Lab she worked for the Fluid Interfaces group and developed web and mobile applications to help busy individuals automatically track key aspects of their day such as sleep, commute, movement, and location. She conducted research in the robotics department of NASA JPL as a Caltech Southern California Edison Company MURF Fellow. Additionally, she worked at Jawbone as a CODE2040 Fellow in the Tools Team (web development). At UMBC, Randi founded and led hackUMBC, an organization that brought hackathons to UMBC’s campus.

She is passionate about helping others learn, and exploring other cultures and the experiences that shape people into who they presently are.

Henry L. Greenidge

Henry L. Greenidge is an experienced attorney and policy advisor who has focused on urban policy related to broadband, transportation, energy, and sustainability. Currently, Henry leads East Coast Government & Community Relations for Cruise Automation, a subsidiary of General Motors focused on developing self-driving cars.

Previously, Henry served as Assistant Director of External Affairs in the New York City Mayor’s Office, where his portfolio included press, communications, and legislative affairs for the City’s $20 billion sustainability program. During the Obama Administration, Henry held policy, budget, and legal roles at the Federal Communications Commission, the U.S. DOT, and the White House Office of Management & Budget.
Henry has been recognized by New York University as an Emerging Leader in Transportation, by City & State Magazine as a 2016 40 Under 40 Rising Star, and by the New York Urban League as a 2017 Trailblazer. Henry holds a Juris Doctor from the University of Baltimore, and a Bachelor of Arts from the Scripps Howard School of Journalism and Communications at Hampton University where he graduated with honors.