Human face generation from textual description via style mapping and manipulation.

  • Abstract:
      Text-to-Face generation is an interesting and challenging task with great potential for diverse computer vision applications in the public safety domain. Compared with Text-to-Image synthesis, relatively little work has addressed Text-to-Face synthesis, owing to the diversity of facial visual attributes and their corresponding descriptions. In this paper, we propose a Text-to-Face generative model that produces high-quality, high-resolution images from a given textual description. The model can also produce a range of diverse images for a single description. In the proposed approach, the encoded text input is mapped into the generator to produce high-quality output, which is further manipulated to better reflect the described attributes. In addition to diversity, the model is able to significantly emphasize the facial attributes provided in the description. Applications of the proposed model include criminal investigation, character generation (video games, movies, etc.), manipulating facial attributes according to a brief textual description, text-based style transfer, and text-based image retrieval. [ABSTRACT FROM AUTHOR]
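The pipeline the abstract outlines (encode the description, map the encoding into the generator's latent space, then manipulate the latent to emphasize described attributes and to introduce diversity) can be sketched roughly as follows. This is a minimal illustrative sketch only: the dimensions, the linear mapping, the attribute direction, and the noise-based diversity step are all assumptions for demonstration, not the paper's actual architecture.

```python
# Illustrative sketch of text-conditioned latent mapping and manipulation.
# All names and dimensions are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
TEXT_DIM, LATENT_DIM = 256, 512

# Stand-in for a pretrained text encoder's output (a sentence embedding).
text_embedding = rng.normal(size=TEXT_DIM)

# A learned mapping from text space into the generator's latent space
# (here simplified to a single linear layer).
W = rng.normal(scale=0.01, size=(LATENT_DIM, TEXT_DIM))
latent = W @ text_embedding

# Attribute manipulation: shift the latent along a learned direction
# (e.g. "smiling") with strength alpha to emphasize that attribute.
smile_direction = rng.normal(size=LATENT_DIM)
smile_direction /= np.linalg.norm(smile_direction)
alpha = 2.0
edited_latent = latent + alpha * smile_direction

# Diversity: small latent perturbations yield varied faces for the
# same description when fed to the generator.
samples = [edited_latent + 0.1 * rng.normal(size=LATENT_DIM)
           for _ in range(4)]
print(len(samples), samples[0].shape)
```

In a real system the generator (e.g. a StyleGAN-style network) would decode each latent into an image; here the latents themselves stand in for that step.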
    • Copyright:
      Copyright of Multimedia Tools & Applications is the property of Springer Nature.