This month, we’ve been thinking about how influential tech has become in public discourse and even the course of history (see the Capitol breach of January 6th). We’ve been asking Americans who lived through the ’60s whether that period felt as tumultuous and uncertain as the last few years; after all, that decade saw the assassination of both the nation’s sitting president and its most revered civil rights leader; the first humans in space; the Cuban Missile Crisis; the March on Washington; the passage of the Civil Rights Act; the Vietnam War and the draft; the Beatles, the Rolling Stones, and Woodstock. All in just 10 years!
Back then, all Americans learned about President Kennedy’s assassination in the same way (via the handful of nationally syndicated radio and TV stations), connecting everyone through a shared experience.
The soldiers who fought in Vietnam could not live-stream My Lai.
Now we witness events in real time, from multiple perspectives, some of which have hidden incentive structures or agendas. Two people can watch different footage, hear different “facts,” and come away with diametrically opposed understandings of the same event.
Humans aren’t wired to thrive - or even survive - in such an environment. We have complex brains, but they’re designed to interpret information from multiple simultaneous sources (i.e., our senses) in order to make simple, split-second decisions: is that bush moving because the wind changed or because there’s a lion in it? Should I fight, flee, or freeze?
What if *every bush* starts behaving as though there might be a lion in it? How does the brain determine which threats are real and which imagined or simulated?
Our information economy, driven so heavily by tech giants, imposes more cognitive load than the human brain can intelligently decipher, much less react to responsibly. Add in a global pandemic and you end up with an entire country (or world) of scared, angry, exhausted people who are even less equipped to tackle the heavy burden of separating fact from fiction.
Duh, you say. So what can we do about it?
- Support and elect legislators who have a deeper understanding of how these technologies work (remember that time Orrin Hatch asked Mark Zuckerberg how his business model was sustainable if he didn’t charge for the service? *Cringe*).
- Create robust federal and state laws to prosecute and penalize both those who create false or misleading content and those who disseminate it (including media companies).
- Establish more independent/government oversight of tech and media companies, similar to the FCC for traditional media companies and the FDA for drug companies; this new entity must be staffed by highly competent, knowledgeable technologists who understand the workings of the tech they are paid to oversee.
- Work to create more fail-safe technologies that detect and root out misinformation, such as tools that prevent manipulated photos or videos from being published, potentially using AI to sort fact from fiction and help present less biased, fact-driven information to all.
At Prepare we love to focus on the *incredible* ways that tech is improving lives (like detecting breast cancer or uncovering ancient civilizations!), but we will not shy away from the ways it causes harm. We’ll be talking about these dichotomies in the coming months. And we’d love to hear from YOU on how we can enrich your world and your experience through awareness of, access to, and collaboration around fourth-industrial-revolution technology. Have an idea? Email us at info@prepare.ai!
Yours,
Fully (AKA Cindy) Teasdale, Executive Director