
How long before my computer can hallucinate an entire movie?

Adam Mackay
3 min read · Feb 19, 2018


2018 is the year the 'deepfake' trend started. Photoshop is so last year; the kids nowadays use a machine learning tool to automatically replace one person's face with another's in a video. Obviously, since this is the internet age, people are using it for two things: fake celebrity porn and inserting Nicolas Cage into random movies.

The Deepfake algorithm is able to digitally hallucinate realistic video.

At the moment the process is simple but processor-intensive: it can take several days to generate a short clip of plausible footage at low resolution. The pornographic results have been banned by the big-name websites, and so the media hoo-ha has started to die down. But the can of worms is open. They are wriggling out slowly.

This machine learning system makes use of open-source libraries (Keras with a TensorFlow back-end). Its creator trained a neural network to reconstruct a photorealistic human face from a distorted input image. After several hours of training on many distorted images of a particular person's face, it becomes possible to swap out the distorted input image and replace it with a non-distorted version of a different face. The algorithm then 'corrects' the new face to more closely resemble the imagery it was trained on. The results are…
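The idea above can be sketched in code. What follows is a minimal toy version of the architecture, not the actual deepfake tool: one shared encoder learns a compact representation of "a face", and two separate decoders each learn to reconstruct one specific person from it. Training pairs a distorted (warped) crop with its clean original; at swap time you encode person B's face but decode it with person A's decoder. All layer sizes, names, and the random "training data" here are illustrative assumptions.

```python
# Toy sketch of the shared-encoder / dual-decoder face-swap idea.
# Sizes and data are placeholders, not the real tool's configuration.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

IMG = 64  # toy resolution; real face crops were similarly small


def build_encoder():
    # One encoder, shared by both people, maps a face to a latent vector.
    inp = keras.Input(shape=(IMG, IMG, 3))
    x = layers.Conv2D(32, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(256, activation="relu")(x)
    return keras.Model(inp, latent, name="shared_encoder")


def build_decoder(name):
    # Each person gets their own decoder back to image space.
    inp = keras.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 64, activation="relu")(inp)
    x = layers.Reshape((16, 16, 64))(x)
    x = layers.Conv2DTranspose(32, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return keras.Model(inp, out, name=name)


encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")
decoder_b = build_decoder("decoder_person_b")

# Two autoencoders that share the same encoder weights.
inp = keras.Input(shape=(IMG, IMG, 3))
auto_a = keras.Model(inp, decoder_a(encoder(inp)))
auto_b = keras.Model(inp, decoder_b(encoder(inp)))
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")

# Training: each net learns to undo the distortion for its own person.
# Random arrays stand in for the thousands of real face crops.
warped_a = np.random.rand(4, IMG, IMG, 3).astype("float32")
clean_a = np.random.rand(4, IMG, IMG, 3).astype("float32")
auto_a.fit(warped_a, clean_a, epochs=1, verbose=0)

# The swap: encode person B's face, reconstruct with person A's decoder.
face_b = np.random.rand(1, IMG, IMG, 3).astype("float32")
swapped = decoder_a.predict(encoder.predict(face_b, verbose=0), verbose=0)
print(swapped.shape)  # a 64x64 RGB image batch
```

The shared encoder is the trick: because both people pass through the same bottleneck, the latent code captures pose and expression rather than identity, and each decoder paints its own person's identity back on. This is also why training takes hours to days per pair of faces.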



Written by Adam Mackay

AI researcher and author with 20 years in safety-critical systems. Exploring the fusion of AI and physical world. Charting the future of cyber-physical systems.
