this should work on its face: many machine learning algorithms optimize for a low Gini measure. A decision tree classifier, for example, makes binary splits based on whichever split gives the greatest reduction in Gini impurity, and astronomers use something similar to compress the data sent back from space telescope cameras to a reasonable file size. So if a picture of a face has weird Gini coefficients, it makes sense that it would have been AI generated.
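To make the two senses of "Gini" concrete, here is a rough NumPy sketch (mine, not from the paper): the Gini impurity a decision tree minimizes when choosing a split, and the Gini coefficient of pixel intensities, the concentration statistic the astronomy-style measurement relies on. As I understand it, the paper compares this kind of statistic between the two eyes' reflections; the sketch below is just the arithmetic.

    import numpy as np

    def gini_impurity(labels):
        # Gini impurity of a set of class labels, as used by decision trees:
        # 1 - sum_k p_k^2, where p_k is the fraction of samples in class k.
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def impurity_reduction(parent, left, right):
        # Weighted drop in Gini impurity from splitting `parent` into `left`/`right`;
        # a decision tree greedily picks the split that maximizes this quantity.
        n = len(parent)
        return gini_impurity(parent) - (
            len(left) / n * gini_impurity(left) + len(right) / n * gini_impurity(right)
        )

    def gini_coefficient(pixels):
        # Gini coefficient of pixel intensities: near 0 means the light is spread
        # evenly, near 1 means it is concentrated in a few pixels.
        # One common sample formula: sum_i (2i - n - 1) * x_(i) / (n * sum(x)).
        x = np.sort(np.abs(np.asarray(pixels, dtype=float).ravel()))
        n = x.size
        i = np.arange(1, n + 1)
        return np.sum((2 * i - n - 1) * x) / (n * x.sum())

    # toy usage
    labels = np.array([0, 0, 0, 1, 1, 1])
    print(impurity_reduction(labels, labels[:3], labels[3:]))  # 0.5, a perfect split
    print(gini_coefficient(np.random.rand(64, 64)))            # ~0.33 for uniform noise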
To quote the article: "While the eye-reflection technique offers a potential path for detecting AI-generated images, the method might not work if AI models evolve to incorporate physically accurate eye reflections, perhaps applied as a subsequent step after image generation."

I'm not discouraging AI detection; we will absolutely need it in the future, but we have to acknowledge that AI detection is a cat and mouse game.
oh absolutely, it’s specifically a generative adversarial network!