LHSC dean weighs in on fake news
Fakery in news and politics is nothing new, but the emergence of so-called “deep fakes” is a truly disturbing development – a nefarious innovation that should worry anyone who cares about the free flow of information in a democracy. The question is, what, if anything, can be done about it?
Deep-fake videos – in which people are seen doing things they never did and speaking words they never spoke – have been popping up more and more frequently. They are increasingly sophisticated. Recent developments in artificial intelligence have given rise to powerful new video software that can create astonishingly realistic footage that is utterly false. Some of the most widely distributed deep fakes have involved pornographic portrayals of celebrities, but the power of deep-fake video has not escaped the notice of political operatives and autocratic regimes. It is only a matter of time before deep fakes become a serious part of our cultural landscape and a potent tool in the arsenal of rulers and regimes bent on destroying our trust in democratic institutions and in each other.
Even more troubling, the rise of deep fakes comes at a time when trust in traditional media is ebbing away. Even as we have been encouraged by our president to distrust journalists as “enemies of the people,” we have been subjected to more and more outright fakery from non-journalists. The White House itself has made primitive attempts at fakery – using Photoshop to slenderize the president in official photos or to extend his arms and hands to more flattering dimensions. In a video distributed by the White House press office, an interaction between CNN correspondent Jim Acosta and a press intern was altered to subtly speed up the action, making it appear, falsely, that Acosta had struck the intern.
As we approach the 2020 election, it’s fair to assume that deep fakes will be part of the campaign landscape. After all, it was almost 50 years ago that the fake “Canuck letter” (crafted by Richard Nixon’s so-called “dirty tricks” team) torpedoed the candidacy of Democrat Edmund Muskie. In 2004, in what might be called a reverse-fake, CBS News’ 60 Minutes and its star anchorman Dan Rather based a breathless investigation of George W. Bush’s Vietnam draft experience on documents that turned out to be forged. The episode cost Rather his career and CBS News its reputation. Now this flow of printed dirty tricks is poised to move into video.
And therein lies a great danger. “I saw it with my own eyes” has always been a reasonably reliable separator of fact from fiction. We could watch the video of the violence in Charlottesville and make our own judgments about whether there were “fine people on both sides.” We could see the floods in California or the asylum seekers at the southern border and form our own opinions. When the president says his national security team is “misquoted,” we can watch their testimony to Congress on video and draw our own conclusions.
But now, our technological prowess has unleashed artificial intelligence tools that will make it increasingly difficult to ascertain whether that next viral video is real or fake. It will be harder to tell where videos come from and who is behind them. It will be harder to trust that something a politician said or a protester shouted is real. These are vitally important questions, and we will be less and less capable of answering them.
For now, the only solution is to ask more questions. It is no longer safe to automatically, reflexively trust your own eyes. There will be powerful people, with powerful technology, working hard to trick you. Don’t let them.