Wikipedia:Reference desk/Archives/Computing/2023 May 24



May 24

State-of-the-art deep fake

Is it technically possible, today, to produce a fake video in which someone does or says something he actually didn't, and which is completely impossible to debunk even with expert analysis? If not, how far are we from it?

2.42.135.40 (talk) 09:30, 24 May 2023 (UTC)

It is hard to tell the exact state of the art, since actors – commercial companies and intelligence organizations alike – have reasons not to keep other actors abreast of their actual capabilities. If it is not yet quite possible today, it will be tomorrow.  --Lambiam 16:00, 24 May 2023 (UTC)
This is looking at it the wrong way around. The way it works is that someone says, "I have a test that can detect a deep fake." Then someone else tunes their deep fake process to pass that test. Then someone else says, "I have a new deep fake tester," and the deep fake process is altered to pass that test too. It is a cat-and-mouse game. You don't make a deep fake process that is more realistic than real videos; you make one that passes all current tests for whether a video is real. 97.82.165.112 (talk) 17:12, 24 May 2023 (UTC)
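To make that loop concrete, here is a minimal sketch of the second step, written in PyTorch (an assumption; no framework is named above). A frozen detector plays the role of the current test, and a generator is fine-tuned purely so that its output passes it. All shapes and architectures are toy stand-ins, not anyone's actual models.

 import torch
 import torch.nn as nn
 
 # Toy stand-ins; real deep fake generators and detectors are far larger.
 detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))  # higher score = "looks fake"
 generator = nn.Sequential(nn.Linear(100, 3 * 64 * 64), nn.Tanh())
 
 for p in detector.parameters():        # the current test is fixed...
     p.requires_grad_(False)
 
 opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
 for step in range(1000):               # ...and the faker trains against it
     z = torch.randn(16, 100)
     frames = generator(z).view(16, 3, 64, 64)
     loss = detector(frames).mean()     # minimise the "looks fake" score
     opt.zero_grad()
     loss.backward()
     opt.step()
 # When a new test appears, swap in the new detector and repeat.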
Clandestine actors will not reveal that they have a process not detected by current tests, so the test developers have nothing to go on for improving the tests. It doesn't have to be more realistic than real videos. The way tests are developed now is by detecting "fingerprints", tell-tale patterns specific to videos generated by known deep fake generators. At the moment these patterns are often still so obvious that you don't need expert analysis. Adversarial machine learning can itself discover such tell-tale patterns, probably better than human experts can, and use this to avoid them. It does not have to be perfect: as tests become powerful enough to produce hardly any false negatives, they will inevitably also start producing false positives, and once those become an appreciable fraction it is game over – a test with a false-positive rate of just 0.1% already flags a thousand genuine clips in every million screened. Some relief may be offered by attaching an unforgeable digital chain of provenance.  --Lambiam 19:18, 24 May 2023 (UTC)
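A minimal sketch of the provenance idea, assuming Python's cryptography library and Ed25519 keys (both choices are illustrative, and the file name is hypothetical): the camera or publisher signs a hash of the video bytes at capture time, and anyone holding the public key can later check that the file is unchanged.

 import hashlib
 from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
 from cryptography.exceptions import InvalidSignature
 
 # At capture/publication time: sign a digest of the video bytes.
 signing_key = Ed25519PrivateKey.generate()
 video_bytes = open("clip.mp4", "rb").read()    # hypothetical file name
 digest = hashlib.sha256(video_bytes).digest()
 signature = signing_key.sign(digest)
 
 # Later, anyone with the public key can verify the clip is unmodified.
 public_key = signing_key.public_key()
 try:
     public_key.verify(signature, digest)
     print("provenance intact")
 except InvalidSignature:
     print("clip was altered or signature is bogus")

Signing alone only proves the file hasn't changed since it was signed; a real scheme would also need to bind the key to a trusted device or publisher, which is what efforts like C2PA attempt.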
You might look up whether Bruce Schneier has written on this question. —Tamfang (talk) 18:00, 26 May 2023 (UTC)
Here are a couple of Schneier's articles on the topic:
Detecting fake videos
Detecting deep fake videos by detecting evidence of human blood circulation
In the latter, Schneier notes, "Of course, this is an arms race. I expect deep fake programs to become good enough to fool FakeCatcher in a few months." CodeTalker (talk) 19:15, 26 May 2023 (UTC)
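For context, the approach in the second article looks for the faint periodic colour change that blood flow produces in a real face and that synthesized faces tend to lack. A toy illustration of the underlying signal, assuming NumPy and a pre-cropped face region per frame (real systems such as FakeCatcher are far more elaborate):

 import numpy as np
 
 def pulse_strength(face_frames: np.ndarray, fps: float = 30.0) -> float:
     """face_frames: (num_frames, H, W, 3) RGB array of a cropped face."""
     # Mean green-channel intensity per frame; green carries the strongest
     # photoplethysmographic (blood-flow) signal.
     signal = face_frames[..., 1].mean(axis=(1, 2))
     signal = signal - signal.mean()                # remove the DC component
     spectrum = np.abs(np.fft.rfft(signal))
     freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
     band = (freqs >= 0.7) & (freqs <= 4.0)         # ~42-240 beats per minute
     # Fraction of spectral energy in the heart-rate band; a low value is one
     # (weak) hint that no circulation signal is present.
     return float(spectrum[band].sum() / (spectrum.sum() + 1e-9))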