Big Tech Companies Selling Lethal Autonomous Weapons
to the US Military: the New Rules & Ethics of War
a Tech 2025 Think Tank
Thursday, April 4 (6pm-8pm)
The Williamsburg Hotel, Brooklyn
Are we prepared for the future of war with AI and who
gets to define the new rules?
Guest Speaker: Patrick Tucker, Technology Editor (Defense One), Futurist and Author of The Naked Future: What Happens in a World That Anticipates Your Every Move?
Join us for our April Tech 2025 Think Tank, sponsored by The Williamsburg Hotel! Our think tanks are interactive, open discussions — an exchange of information and ideas between our expert guest speaker(s) and the audience, where everyone gets the opportunity to share their opinions and ideas. We encourage and facilitate thought-provoking, respectful discourse as we all struggle to understand the sweeping changes that are coming to our world!
About This Discussion
With the rise of new and frighteningly powerful artificial intelligence (AI) and automated weaponry, military defense has become more complicated than anyone could have imagined just five years ago. The Pentagon seeks to implement these advanced technologies as soon as possible (no doubt feeling pressure to beat America's enemies to the punch).
Over the past two years, Amazon, Google, and Microsoft have entered into contracts with the Pentagon to develop and sell advanced technologies for autonomous weapons to be used in combat. This has shocked and appalled AI research scientists, employees of the tech companies, and the general public, who have expressed varying degrees of outrage and protest:
- Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems (The Verge)
- ‘The Business of War’: Google Employees Protest Work for the Pentagon (The NY Times)
- ‘We did not sign up to develop weapons’: Microsoft workers protest $480m HoloLens military deal (CNBC News)
- Amazon employees demand company cut ties with ICE (Washington Post)
- How tech employees are pushing Silicon Valley to put ethics before profit (Vox)
For the Department of Defense and big tech companies, this has been nothing short of a PR disaster, as media outlets publish headlines that feed into our darkest dystopian nightmares about AI ("The US Army wants to turn tanks into AI-powered killing machines"). But there is far more at stake here than just bad press. As other nations and our adversaries develop these technologies at a furious pace, the future of our nation's security hangs in the balance while we struggle to find the happy medium between useful, beneficial, ethical AI and autonomous killing machines.
How do we define the new rules of war and our ethics for the next military revolution powered by AI? And how should the military and tech companies navigate this volatile topic with the general public? Watch a fictional dramatization of how autonomous weapons might be used in the future (Slaughterbots) HERE.
Our expert guest speaker, Patrick Tucker (Technology Editor for Defense One, futurist, and author of The Naked Future: What Happens in a World That Anticipates Your Every Move?), will join us for this special think tank exploring how we as a society will define the future of war.
Light food, beverages, beer, and wine included.
ABOUT PATRICK TUCKER
Patrick Tucker, Technology Editor, Futurist and Author
Patrick Tucker is Technology Editor for Defense One and author of The Naked Future: What Happens in a World That Anticipates Your Every Move? Previously, Tucker was deputy editor for The Futurist for nine years. Tucker has written about emerging technology in Slate, The Sun, MIT Technology Review, Wilson Quarterly, The American Legion Magazine, BBC News Magazine, Utne Reader, The Atlantic, NextGov, and elsewhere.
Our discussion will cover Patrick's recent writings on this topic for Defense One, including US Military Changing 'Killing Machine' Robo-tank Program After Controversy:
“A key area of controversy is over what is sometimes called Rapid Target Acquisition, or RTA — a method of finding targets, putting little red digital boxes around them on a screen, and putting a bullet, missile, or bomb into that box. It’s an emerging capability fraught with difficult ethical considerations and complexity: Is the data that goes into the process of box-drawing correct? Is the intelligence collection behind that data good or was it gleaned from unreliable sources? Where was human supervision during the process?” — Patrick Tucker (Defense One)
CREATE MEANINGFUL CONNECTIONS AT TECH 2025
OUR AWESOME SPONSOR
The Williamsburg Hotel (in the Library)
Address: 96 Wythe Ave, Brooklyn, NY 11249
Located in prime North Brooklyn, with Manhattan just over the bridge!