A simple way to fix stereotyped AI images.

    • Abstract:
      A recent study by researchers at Carnegie Mellon University found a simple way to make AI image generators more culturally sensitive and accurate. The researchers asked people from underrepresented countries to supply captioned images that better reflect their societies, then used those images to fine-tune the image generator Stable Diffusion so that it flags and avoids stereotyped imagery. In evaluations, the fine-tuned model produced images judged less offensive between 56 and 63 percent of the time. The findings suggest that AI bias can be countered by feeding a model a small number of additional, culturally diverse images. [Extracted from the article]