Most previous work on automatic caricature generation follows a very similar scheme. Once the facial features (eyes, mouth, etc.) have been detected, they are compared against a mean face that is taken as the standard. The most distinctive features are identified from this comparison and exaggerated by deforming the photograph, which produces the caricature effect. To make the result look hand-drawn, it is common to apply edge detection algorithms so that the image appears to have been painted with simple lines.
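As a rough, non-authoritative sketch of this classic scheme, the exaggeration step can be thought of as scaling each detected landmark's offset from the mean face, followed by an edge-detection pass for the line-drawing look. The function names, the factor k, and the use of OpenCV's Canny detector below are our own illustrative choices, not taken from any particular system.

```python
import numpy as np
import cv2  # OpenCV, assumed available for the edge-detection step


def exaggerate_landmarks(landmarks, mean_landmarks, k=1.5):
    """Push each detected landmark away from the mean face by a factor k.

    landmarks, mean_landmarks: (N, 2) arrays of facial feature points.
    k > 1 amplifies the difference from the mean, which is the core of
    the classic exaggeration step; the warped photograph would then be
    generated from these displaced points.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    mean_landmarks = np.asarray(mean_landmarks, dtype=float)
    return mean_landmarks + k * (landmarks - mean_landmarks)


def sketch_style(image_gray, low=50, high=150):
    """Approximate the 'painted with simple lines' look using Canny edges."""
    edges = cv2.Canny(image_gray, low, high)
    return 255 - edges  # dark lines on a white background
```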
The main problem with these methods is that they cannot follow the particular style of a cartoonist, because they have not been “taught” how to draw the caricature. As we have seen before, imitating a given style requires focusing on two main aspects: the painting style and the way of emphasizing the features. To do this, a set of photographs and their corresponding caricatures, with the facial features located, is needed. Using machine learning algorithms, the system learns how the artist transforms each feature. This solves the problem of deciding which features to exaggerate and how to do it, but the painting style still has to be imitated. For that, different parts of different caricatures are reused: for each feature, the most suitable patch is selected and deformed to fit what the system has learned, so the caricature is assembled as if it were a collage.
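Under strong simplifying assumptions, this learning-based approach can be sketched as a per-feature linear map fitted on photo/caricature landmark pairs, plus a nearest-neighbour patch selection from the training caricatures. Real systems are more sophisticated; every name and modelling choice below is hypothetical and only illustrates the idea.

```python
import numpy as np


class FeatureTransformLearner:
    """Toy per-feature model: a linear map from photo offsets to caricature offsets."""

    def fit(self, photo_offsets, caric_offsets):
        # photo_offsets, caric_offsets: (M, 2) offsets of one facial feature from
        # the mean face, measured in M training photographs and their caricatures.
        X = np.asarray(photo_offsets, dtype=float)
        Y = np.asarray(caric_offsets, dtype=float)
        # Least-squares fit of X @ W ~= Y: how the artist tends to exaggerate this feature.
        self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return self

    def predict(self, photo_offset):
        # Predicted caricature offset for the same feature in a new photograph.
        return np.asarray(photo_offset, dtype=float) @ self.W


def pick_patch(predicted_shape, patch_library):
    """Choose the training caricature patch whose feature shape best matches the prediction.

    patch_library: list of (feature_shape, patch_image) pairs cut from the
    training caricatures. The chosen patch would then be deformed to the
    predicted shape and pasted into the final collage.
    """
    predicted_shape = np.asarray(predicted_shape, dtype=float)
    distances = [np.linalg.norm(np.asarray(shape, dtype=float) - predicted_shape)
                 for shape, _ in patch_library]
    return patch_library[int(np.argmin(distances))][1]
```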
In such a creative field, automatic caricatures cannot compete with those drawn by a cartoonist, but they can still be useful in some applications. At Gradiant we are working on a project for the automatic personalization of children's books. The motivation is to integrate a child's face into the book's illustrations while preserving the book's style. This takes personalization a step further, allowing children to feel fully part of the story.