Frequently Asked Questions
I've got a problem with my image credits!
Reach out to me on Twitter or send an email to email@example.com
Can I use this model for my own project?
How does this work?
This toonification system is built using a deep learning method called Generative Adversarial Networks (GANs). In fact, it's a rather convoluted combination of StyleGAN2, resolution-dependent model interpolation, and pixel2style2pixel.
At some point I'll write up more of the technical details on my blog.
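As a rough picture, the pipeline first encodes a photo into the generator's latent space (that's the pixel2style2pixel part), then decodes that latent with a StyleGAN2 generator whose weights blend the original and a cartoon-fine-tuned model. Here's a minimal sketch of that two-stage structure, where the encoder and generator are toy stand-in functions, not the real networks:

```python
# Toy sketch of the two-stage toonification pipeline. "encode" and
# "blended_generator" are plain stand-in functions, not the real
# pixel2style2pixel or StyleGAN2 models.

def encode(photo):
    # pixel2style2pixel maps a photo into the generator's latent
    # space; here the "latent" is just a scaled list of numbers.
    return [pixel * 0.5 for pixel in photo]

def blended_generator(latent):
    # The blended StyleGAN2 generator turns a latent code back into
    # an image; here it's a trivial stand-in transformation.
    return [value + 1.0 for value in latent]

def toonify(photo):
    latent = encode(photo)            # photo -> latent code
    return blended_generator(latent)  # latent -> cartoon image

print(toonify([2.0, 4.0]))  # -> [2.0, 3.0]
```

The key point is that the generator never sees your photo directly; it only sees the latent code the encoder produces.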
How did you come up with this idea?
This all started from some earlier experiments Doron shared on Twitter that got a lot of interest. For a description of those early experiments, see this blog post on self-toonification; the idea is the same, but the method is different (and not really suitable for hosting as a webapp).
We then made the original Toonify Yourself, which got a lot of attention, and since then we have been working on improvements, producing Toonify HD.
Fundamentally, this is all based on my work on resolution-dependent model interpolation; you can read the paper on arXiv.
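The core trick is to blend the weights of two generators layer by layer, with the blend depending on the resolution each layer operates at: low-resolution layers (coarse structure like face shape) take the cartoon-fine-tuned weights, while high-resolution layers (fine texture like skin) keep the photoreal ones. A toy sketch under that assumption, where the dict-of-lists "models" stand in for real StyleGAN2 weights:

```python
# Hypothetical sketch of resolution-dependent model interpolation.
# Each layer's weights are a linear blend of the base (photo) model
# and the fine-tuned (cartoon) model, with the blend factor chosen
# by the resolution that layer operates at. The toy "models" below
# are dicts mapping resolution -> weight list.

def blend_models(base, finetuned, swap_resolution=32):
    blended = {}
    for res in base:
        # Coarse layers take the cartoon weights (alpha = 1);
        # fine layers keep the photoreal weights (alpha = 0).
        alpha = 1.0 if res <= swap_resolution else 0.0
        blended[res] = [
            (1 - alpha) * b + alpha * f
            for b, f in zip(base[res], finetuned[res])
        ]
    return blended

photo = {8: [0.2], 32: [0.4], 128: [0.6]}  # resolution -> weights
toon = {8: [1.2], 32: [1.4], 128: [1.6]}

blend = blend_models(photo, toon)
print(blend[8], blend[128])  # -> [1.2] [0.6]
```

This hard 0/1 switch is the simplest case; intermediate alpha values give a smoother mix of the two domains.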
How do I get good results?
The algorithm works best with high-resolution images without much noise. Looking straight at the camera also seems to work best. Something like a corporate headshot tends to work well.
Do you store my photo?
No, your image is discarded as soon as you get your result. We don't store it or use it for anything other than producing your toonification.
My face wasn't found!
We use the open-source dlib face detector to find faces; it's designed to pick up frontal faces but isn't perfect.
Where did my glasses go?
Not many characters from animated films wear glasses, so the model seems to have learnt to mostly remove them. It also has trouble with bald heads, hats, and various other things. For the HD model, you can try sliding the glasses slider to 5 to restore your spectacles!