A Few Thoughts about Deep Fakes

Someone from the House Permanent Select Committee on Intelligence recently contacted me about a hearing they’re having on the subject of deep fakes. I can’t attend the hearing, but the conversation got me thinking about the subject, and I made a few quick notes….

What You See May Not Be What Happened

The idea of modifying images is as old as photography. At first, it had to be done by hand (sometimes with airbrushing). By the 1990s, it was routinely being done with image manipulation software such as Photoshop. But it’s something of an art to get a convincing result, say for a person inserted into a scene. And if, for example, the lighting or shadows don’t agree, it’s easy to tell that what one has isn’t real.

What about videos? If one does motion capture, and spends enough effort, it’s perfectly possible to get quite convincing results—say for animating aliens, or for putting dead actors into movies. The way this works, at least to a first approximation, is to painstakingly pick out the key points on one face and map them onto another.

What’s new in the past couple of years is that this process can basically be automated using machine learning. And, for example, there are now neural nets that are simply trained to do “face swapping”:

[Image: face swap]

In essence, what these neural nets do is to fit an internal model to one face, and then apply it to the other. The parameters of the model are in effect learned from looking at lots of real-world scenes, and seeing what’s needed to reproduce them. The current approaches typically use generative adversarial networks (GANs), in which there’s iteration between two networks: one trying to generate a result, and one trying to discriminate that result from a real one.
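
To make the adversarial iteration concrete, here is a minimal sketch, assuming PyTorch and using a toy one-dimensional distribution in place of real images; the architecture and numbers are purely illustrative, not any particular production system:

    import torch
    import torch.nn as nn

    real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # toy stand-in for "real" data
    noise = lambda n: torch.randn(n, 8)                    # latent input to the generator

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                 # generator
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # discriminator

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        # Discriminator step: learn to label real samples 1 and generated samples 0
        real, fake = real_data(64), G(noise(64)).detach()
        loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

        # Generator step: learn to make the discriminator label generated samples 1
        fake = G(noise(64))
        loss_G = bce(D(fake), torch.ones(64, 1))
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()

The loop is considered to have converged precisely when the discriminator can no longer do better than chance on the generator’s output.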

Today’s examples are far from perfect, and it’s not too hard for a human to tell that something isn’t right. But even just as a result of engineering tweaks and faster computers, there’s been progressive improvement, and there’s no reason to think that within a modest amount of time it won’t be possible to routinely produce human-indistinguishable results.

Can Machine Learning Police Itself?

OK, so maybe a human won’t immediately be able to tell what’s real and what’s not. But why not have a machine do it? Surely there’s some signature of something being “machine generated”. Surely there’s something about a machine-generated image that’s statistically implausible for a real image.

Well, not naturally. Because, in fact, the whole way the machine images are generated is by having models that as faithfully as possible reproduce the “statistics” of real images. Indeed, inside a GAN there’s explicitly a “fake or not” discriminator. And the whole point of the GAN is to iterate until the discriminator can’t tell the difference between what’s being generated, and something real.

Could one find some other feature of an image that the GAN isn’t paying attention to—like whether a face is symmetric enough, or whether writing in the background is readable? Sure. But at this level it’s just an arms race: having identified a feature, one puts it into the model the neural net is using, and then one can’t use that feature to discriminate any more.

There are limitations to this, however. Because there’s a limit to what a typical neural net can learn. Generally, neural nets do well at tasks like image recognition that humans do without thinking. But it’s a different story if one tries to get neural nets to do math, and for example factor numbers.

Imagine that in modifying a video one has to fill in a background that’s showing some elaborate computation—say a mathematical one. Well, then a standard neural net basically doesn’t stand a chance.

Will it be easy to tell that it’s getting it wrong? It could be. If one’s dealing with public-key cryptography, or digital signatures, one can certainly imagine setting things up so that it’s very hard to generate something that is correct, but easy to check whether it is.

But will this kind of thing show up in real images or videos? My own scientific work has actually shown that irreducibly complex computation can be quite ubiquitous even in systems with very simple rules—and presumably in many systems in nature. Watch a splash in water. It takes a complex computation to figure out the details of what’s going to happen. And while a neural net might be able to get something that basically looks like a splash, it’d be vastly harder for it to get the details of a particular splash right.

But even though in the abstract computational irreducibility may be common, we humans, in our evolution and the environments we set up for ourselves, tend to end up doing our best to avoid it. We have shapes with smooth curves. We build things with simple geometries. We try to make things evolvable or understandable.  And it’s this avoidance of computational irreducibility that makes it feasible for neural nets to successfully model things like the visual scenes in which we typically find ourselves.

One can disrupt this, of course. Just put in the picture a display that’s showing some sophisticated computation (even, for example, a cellular automaton). If someone tries to fake some aspect of this with a neural net, it won’t (at least on its own) feasibly be able to get the details right.
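
As a concrete illustration, here is a minimal Python sketch of the rule 30 cellular automaton, the kind of simple-to-specify but computationally irreducible pattern such a display could show; reproducing its detailed pixels requires actually running the rule:

    def rule30(width=61, steps=20):
        cells = [0] * width
        cells[width // 2] = 1                  # start from a single black cell
        rows = [cells[:]]
        for _ in range(steps):
            # rule 30: new cell = left XOR (center OR right)
            cells = [cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
                     for i in range(width)]
            rows.append(cells[:])
        return rows

    for row in rule30():
        print("".join("#" if c else "." for c in row))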

I suspect that in the future of human technology—as we mine deeper in the computational universe—irreducible computation will be much more common in what we build. But as of now, it’s still rare in typical human-related situations. And as a result, we can expect that neural nets will successfully be able to model what’s going on well enough to at least fool other neural nets.

How to Know What’s Real

So if there’s no way to analyze the bits in an image to tell if it’s a real photograph, does that mean we just can’t tell? No. Because we can also think about metadata associated with the image—and about the provenance of the image. When was the image created? By whom? And so on.

So let’s say we create an image. How can we set things up so that we can prove when we did it? Well, in modern times it’s actually very easy. We take the image, and compute a cryptographic hash from it (effectively by applying a mathematical operation that derives a number from the bits in the image). Then we take this hash and put it on a blockchain.
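
Here is a minimal sketch of that in Python. hashlib is part of the standard library; submit_to_ledger is a hypothetical placeholder for whatever blockchain or timestamping service one actually chooses to use, and "photo.jpg" is just an example file name:

    import hashlib

    def image_hash(path):
        # SHA-256 of the raw image bytes; any change to the image changes the hash
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    h = image_hash("photo.jpg")    # example file name
    # submit_to_ledger(h)          # hypothetical call: record the hash on a public blockchain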

The blockchain acts as a permanent ledger. Once we’ve put data on it, it can never be changed, and we can always go back and see what the data was, and when it was added to the blockchain.

This setup lets us prove that the image was created no later than a certain time. If we want to prove that the image wasn’t created earlier, then when we create the hash for the image, we can throw in a hash from the latest block on our favorite blockchain.
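
A sketch of that binding, continuing the example above; the block hash shown is a placeholder, and in practice one would fetch the actual latest block hash from the chosen blockchain:

    import hashlib

    latest_block_hash = "<latest block hash, fetched from the blockchain>"   # placeholder
    with open("photo.jpg", "rb") as f:
        bound = hashlib.sha256(f.read() + latest_block_hash.encode()).hexdigest()
    # Recording "bound" on the blockchain shows the commitment was made after that
    # block existed (not earlier) and by the block that records it (not later).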

OK, but what about knowing who created the image? It takes a bit of cryptographic infrastructure—very similar to what’s done in proving the authenticity of websites. But if one can trust some “certificate authority”, then one can associate a digital signature with the image that validates who created it.
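
For the signature itself, something like the following sketch would do, assuming the widely used Python cryptography package; the certificate-authority step, i.e. vouching for who owns the public key, is separate infrastructure not shown here:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # the creator's signing key
    public_key = private_key.public_key()        # published, and vouched for by a CA

    with open("photo.jpg", "rb") as f:           # example file name
        image_bytes = f.read()

    signature = private_key.sign(image_bytes)
    # Anyone with the public key can check the image; verify() raises
    # InvalidSignature if the image or signature has been tampered with.
    public_key.verify(signature, image_bytes)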

But how about knowing where the image was taken? Assuming one has a certain level of access to the device or the software, GPS can be spoofed. If one records enough about the environment when the image was taken, then it gets harder and harder to spoof. What were the nearby Wi-Fi networks? The Bluetooth pings? The temperature? The barometric pressure? The sound level? The accelerometer readings? If one has enough information collected, then it becomes easier to tell if something doesn’t fit.

There are several ways one could do this. Perhaps one could just detect anomalies using machine learning. Or perhaps one could use actual models of how the world works (the path implied by the accelerometer isn’t consistent with the equations of mechanics, etc.). Or one could somehow tie the information to some public computational fact. Was the weather really like that in the place the photo was said to be taken? Why isn’t there a shadow from such-and-such a plane going overhead? Why is what’s playing on the television not what it should be? Etc.
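
As one very simplified example of the physics-based kind of check, assuming NumPy and made-up data conventions: does the displacement implied by double-integrating the accelerometer roughly agree with the displacement implied by GPS? (A real check would also handle gravity removal, sensor bias, and so on.)

    import numpy as np

    def displacement_from_accel(accel, dt):
        # accel: array of shape (n, 3) in m/s^2, sampled every dt seconds
        velocity = np.cumsum(accel, axis=0) * dt      # crude Euler integration
        return velocity.sum(axis=0) * dt

    def roughly_consistent(accel, dt, gps_displacement, tolerance_m=5.0):
        predicted = displacement_from_accel(accel, dt)
        return np.linalg.norm(predicted - np.asarray(gps_displacement)) < tolerance_m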

But, OK, even if one just restricts oneself to creation time and creator ID, how can one in practice validate them?

The best scheme seems to be something like how modern browsers handle website security. The browser tries to check the cryptographic signature of the website. If it matches, the browser shows something to say the website is secure; if not, it shows some kind of warning.

So let’s say an image comes with data on its creation time and creator ID. The data could be metadata (say EXIF data), or it could be a watermark imprinted on the detailed bits in the image. Then the image viewer (say in the browser) can check whether the hash on a blockchain agrees with what the data provided by the image implies. If it does, fine. And the image viewer can make the creation time and creator ID available. If not, the image viewer should warn the user that something seems to be wrong.
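
The check itself is simple. Here is a sketch, where ledger_lookup stands for a hypothetical query against the blockchain, and the metadata field names are just illustrative:

    import hashlib

    def verify_image(path, metadata, ledger_lookup):
        # Recompute the hash from the bits actually received
        with open(path, "rb") as f:
            recomputed = hashlib.sha256(f.read()).hexdigest()
        record = ledger_lookup(recomputed)       # hypothetical blockchain query
        if record is None:
            return False                         # no commitment found: warn the user
        return (record.get("creator") == metadata.get("creator")
                and record.get("created") == metadata.get("created"))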

Exactly the same kind of thing can be done with videos. It just requires video players computing hashes on the video, and comparing to what’s on a blockchain. And by doing this, one can guarantee, for example, that one’s seeing a whole video that was made at a certain time.

How would this work in practice? Probably people often wouldn’t want to see all the raw video taken at some event. But a news organization, for example, could let people click through to it if they wanted. And one can easily imagine digital signature mechanisms that could be used to guarantee that an edited video, for example, contained no content not in certain source videos, and involved, say, specified contiguous chunks from these source videos.
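
One simple way to sketch that, assuming purely for illustration that edits fall on fixed chunk boundaries: hash the source video in chunks, and check that the edited video’s chunk hashes form a contiguous run of the source’s:

    import hashlib

    def chunk_hashes(path, chunk_bytes=1 << 20):
        # Hash a video file in fixed-size chunks (1 MB here, purely illustrative)
        hashes = []
        with open(path, "rb") as f:
            while chunk := f.read(chunk_bytes):
                hashes.append(hashlib.sha256(chunk).hexdigest())
        return hashes

    def is_contiguous_excerpt(edited, source):
        # True if the edited video's chunk hashes appear as one contiguous run of the source's
        n = len(edited)
        return any(source[i:i + n] == edited for i in range(len(source) - n + 1))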

The Path Forward

So, where does this leave us with deep fakes? Machine learning on its own won’t save us. There’s not going to be a pure “fake or not” detector that can run on any image or video. Yes, there’ll be ways to protect oneself against being “faked” by doing things like wearing a live cellular automaton tie. But the real way to combat deep fakes, I think, is to use blockchain technology—and to store on a public ledger cryptographic hashes of both images and sensor data from the environment where the images were acquired. The very presence of a hash can guarantee when an image was acquired; “triangulating” from sensor and other data can give confidence that what one is seeing was something that actually happened in the real world.

Of course, there are lots of technical details to work out. But in time I’d expect image and video viewers could routinely check against blockchains (and “data triangulation computations”), a bit like how web browsers now check security certificates. And today’s “pics or it didn’t happen” will turn into “if it’s not on the blockchain it didn’t happen”.



3 comments

  1. Super function idea:

    Deepfake[img, script]

    Potential options:

    Tone : {“Threatening”, “Apathetic”, “Elated”}

    which takes an image of a person (or animal?) and animates them speaking the provided script. With obvious and encrypted metadata saying it was computationally generated. What could go wrong!

    I say this mostly in jest, but maybe we aren’t all too far away from this being a reality! If anyone can do it, it’s the folks at Wolfram! 🙂

  2. An interesting and informative read as always.

    One comment regarding how a standard neural network could utterly fail at some difficult function such as factoring:

    Recently, neural nets have inspired the notion of an “invertible” logic, where one designs a multiplier that can be operated in reverse, due to a type of network that is reciprocal and energy-conserving.

    These are of course very different from what Stephen Wolfram is talking about; nevertheless, they could change some of the “computational barriers” that neural networks face in the future.

    For example, see:

    https://journals.aps.org/prx/abstract/10.1103/PhysRevX.7.031014

  3. Very interesting. Maybe people will start selling reactive ties or jewelry / headgear that changes patterns according to what you say. Like a live hash.

    So that if someone inserts new text into your speech, it’ll be different from the hashed signature displayed on your tie. It’ll be highly ironic if this is what finally brings smart clothing into the mainstream.