Litterae.eu
Humanities & IT


Face Recognition to detect deepfakes

What if a library such as face_recognition (henceforth FR) were also useful for detecting so-called deepfakes? That is to say: could it tell that a face generated by artificial intelligence, however plausible, does not belong to a real person?
The premise of the question is this: since computers operate only on numbers and mathematical models, the products of generative artificial intelligence, however realistic and complex, can themselves only be the output of numerical models or patterns.
On the other hand, since nothing but mathematical patterns underlies the workings of FR, it should find it fairly easy to identify patterns that are necessarily more repetitive in software than in nature. In other words, and to keep it simple: because FR "thinks" like a deepfake generator, obvious differences in programming and training data aside, it should stand a good chance of recognising whether a face belongs to a real person or was generated by an algorithm.
No sooner said than done.
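Before the experiment, it helps to recall what FR's matching actually computes: face_recognition encodes each face as a 128-dimensional vector, and two faces count as a match when the Euclidean distance between their encodings falls below a tolerance (0.6 by default). A minimal sketch of that decision rule, with random vectors standing in for real encodings:

```python
import numpy as np

# face_recognition represents each face as a 128-dimensional encoding;
# two faces "match" when the Euclidean distance between their encodings
# is below a tolerance (0.6 by default). The vectors below are random
# stand-ins, not encodings of real images.
rng = np.random.default_rng(0)
known = rng.normal(size=128)
same = known + rng.normal(scale=0.01, size=128)  # near-duplicate face
other = rng.normal(size=128)                     # unrelated face

def is_match(a, b, tolerance=0.6):
    """Replicates the distance-vs-tolerance test FR uses internally."""
    return float(np.linalg.norm(a - b)) <= tolerance

print(is_match(known, same))   # True: tiny perturbation, tiny distance
print(is_match(known, other))  # False: independent vectors are far apart
```

In the real library this is what face_recognition.compare_faces does after face_recognition.face_encodings has produced the vectors.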

I have put together four groups of images:
A) 61 different faces generated by artificial intelligence,
B) 69 real faces of celebrities different from each other,
C) 11 deepfakes different from each other and from those in group A,
D) 11 real faces of celebrities different from each other and from those in group B.
Groups C and D served as control groups.
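The group-versus-group protocol above amounts to an all-pairs comparison. Here is a sketch of the counting logic, using random 128-dimensional vectors as stand-ins for the encodings that face_recognition.face_encodings() would produce from the actual image files:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for real encodings: in the experiment, each 128-d vector
# would come from face_recognition.face_encodings(image) on one file.
group_a = [rng.normal(size=128) for _ in range(61)]  # AI-generated faces
group_b = [rng.normal(size=128) for _ in range(69)]  # real celebrity faces

def count_matches(group1, group2, tolerance=0.6):
    """Compare every face in group1 against every face in group2
    and count how many pairs fall within the match tolerance."""
    matches, total = 0, 0
    for enc1 in group1:
        for enc2 in group2:
            total += 1
            if np.linalg.norm(enc1 - enc2) <= tolerance:
                matches += 1
    return matches, total

matches, total = count_matches(group_a, group_b)
print(total)  # 61 * 69 = 4209 pairwise comparisons
```

With independent random vectors no pair falls within tolerance; the real encodings are what make matches possible at all.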
I first broke the engine in by comparing groups A and B: out of the 4209 possible combinations (61 × 69), there was only one case in which FR took an artificial face for that of a celebrity. Evidently a false positive; one in 4209 comparisons is 0.02%.
At this point it was time to really put FR to the test with the two control groups.
I tested the 11 deepfakes of group C first against the fakes of group A and then against the real faces of group B. The comparison between A and C gave 39 matches out of 671 combinations (61 × 11); that is, in 5.81% of cases FR correctly identified the AI-generated face as such. The comparison between B and C gave 0 matches out of the 759 possible combinations (69 × 11), which is to say that FR never took a real face for a software-generated one.
Finally, as a cross-check, I tested the 11 real faces of group D against both A and B. Correctly, in both cases FR found no matches.

Which suggests that the initial intuition was well-founded: FR is able to detect the patterns of faces generated by artificial intelligence. Of course, to a lesser extent than libraries trained specifically for this purpose: 5.81% means that FR flags roughly one deepfake in seventeen. Which is still probably better than a human observer would do.
Furthermore, comparing that 5.81% with the 0.02% of the single false positive, I draw a further confirmation: yes, deepfakes are generated from fairly repetitive patterns. According to this experiment, about 245 times more repetitive than those of nature.
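For the record, the rates can be re-derived from the raw counts; each denominator is just a product of the group sizes:

```python
# Re-deriving the rates in the write-up from the raw counts.
# Group sizes: 61 AI faces (A), 69 real faces (B), 11 per control group.
a, b, c = 61, 69, 11

fp_rate = 1 / (a * b)    # one false positive across A x B
det_rate = 39 / (a * c)  # 39 matches across A x C

print(f"{fp_rate:.2%}")           # 0.02%
print(f"{det_rate:.2%}")          # 5.81%
print(round(det_rate / fp_rate))  # ratio of the two rates: 245
```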


Site designed by litterae.eu. © 2004-2025. All rights reserved.
Info GDPR EU 2016/679: no cookies used, no personal data collected.