Brett Beasley | November 12, 2019
Dr. Ian Goodfellow was true to his name; he didn’t set out to harm anyone. The young Stanford graduate was completing his Ph.D. at the University of Montreal when one night he ventured out to a pub called Les Trois Brasseurs with friends. After some heated debate and a few pints — he thinks he was drinking the amber — Goodfellow came up with one of the most groundbreaking ideas in the history of artificial intelligence. He went home and immediately began coding. His idea appeared in print in 2014 and Goodfellow went on to write a popular textbook on machine learning. He now thrives in the rarefied air of Silicon Valley, working on AI initiatives for the likes of Google and Apple.
To any average reader, Goodfellow’s research appears innocuous, nothing more than a bundle of jargon bristling with equations, charts and graphs. But today some experts say Goodfellow’s beer-fueled breakthrough at Les Trois Brasseurs poses a threat to the future of democracy as we know it.
Goodfellow christened his creation the Generative Adversarial Network, or GAN for short. A GAN pits two algorithms called neural networks against each other. One acts as a forger, learning from a collection of real digital images, audio or video to generate convincing fakes. The other acts as a detective, trying to spot them. The two go round and round in an algorithmic game of cops and robbers, each improving in response to the other, until the forger produces digital fiction that is indistinguishable from fact. Named for its roots in “deep learning,” this high-tech type of forgery is called a “deepfake.”
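For readers who want to see the cops-and-robbers game in code, the loop below is a deliberately tiny sketch of the adversarial idea, not Goodfellow's implementation: the "forger" and "detective" are single-parameter linear models on one-dimensional data, trained with hand-coded gradients, and every name and number in it (the target distribution, learning rates, batch size) is invented for illustration.

```python
# Toy 1-D sketch of the adversarial game behind a GAN (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN, REAL_STD = 4.0, 0.5  # the "real data" the forger must imitate

def real_samples(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Forger ("generator"): turns random noise z into a sample a*z + b.
a, b = 1.0, 0.0
# Detective ("discriminator"): scores a sample with sigmoid(w*x + c),
# where 1 means "looks real" and 0 means "looks fake".
w, c = 0.0, 0.0

lr_d, lr_g, batch = 0.1, 0.05, 64

for step in range(2000):
    x_real = real_samples(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b

    # Detective's turn: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. get better at telling real from forged.
    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    w += lr_d * np.mean((1 - s_real) * x_real - s_fake * x_fake)
    c += lr_d * np.mean((1 - s_real) - s_fake)

    # Forger's turn: gradient ascent on log D(fake),
    # i.e. get better at fooling the detective.
    s_fake = sigmoid(w * x_fake + c)
    a += lr_g * np.mean((1 - s_fake) * w * z)
    b += lr_g * np.mean((1 - s_fake) * w)

# After training, the forger's output should have drifted toward the real data.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(f"forger now produces samples with mean ~ {fake_mean:.2f} (real mean is 4.0)")
```

In a real GAN the two players are deep neural networks and the data is high-dimensional (pixels rather than single numbers), but the alternating improve-the-detective, improve-the-forger loop is the same.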