## Google’s Explanation for Gemini’s Overcorrection for Diversity in Image Generation
Google has recently offered an explanation for the overcorrection for diversity in the image generation feature of its AI chatbot, Gemini. The company had earlier suspended the feature with a commitment to fix it. Prabhakar Raghavan, Google’s Senior Vice President for Knowledge & Information, said that the company’s tuning to ensure generated images showed a range of people failed to account for cases that should clearly not show a range.
### The Overcorrection Problem
Raghavan noted that, over time, the model became far more cautious than intended, declining to respond even to prompts that were not inherently offensive. The result was overcorrection in some cases and excessive caution in others, producing inaccurate and sometimes embarrassing images.
Google had designed Gemini’s image generation to avoid producing violent or sexually explicit images of real people, and to have generated images depict people of diverse ethnicities and characteristics. However, when users asked for images of people of a specific ethnicity or gender, Gemini often failed to deliver results.
For instance, when asked to generate a glamorous image of a couple of a specific ethnicity or nationality, Gemini succeeded for “Chinese,” “Jewish,” and “South African” requests but failed when asked for an image of white people.
### Historical Accuracy Problems
Gemini also struggled to generate historically accurate images. When users requested images of German soldiers during World War II, for example, it produced pictures of Black men and Asian women in Nazi uniforms. Similarly, when asked for images of “America’s founding fathers” and “Popes throughout the ages,” it depicted people of color in those roles, and when asked to make its Pope images historically accurate, it declined to produce any result.
Raghavan emphasized that Google did not want Gemini to refuse to create images of any specific group or to generate historically inaccurate pictures, and he reiterated Google’s commitment to improving Gemini’s image generation.
### Future Enhancements
Improving Gemini’s image generation feature will require extensive testing, which could take some time. For now, if a user asks Gemini to create a picture, the chatbot replies: “We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does.”
### Conclusion
Google’s AI chatbot, Gemini, has drawn criticism for overcorrecting for diversity in its image generation feature, which led it to refuse to create images of certain groups and to produce historically inaccurate ones. Google has acknowledged these problems and is working on improvements, but users may have to wait a while before the changes are rolled out.
### Questions and Answers
Q1: What is the problem with Google’s Gemini?
A1: Gemini has been overcorrecting for diversity in its image generation feature, leading it to refuse to create images of certain groups and to generate historically inaccurate images.
Q2: How did Google react to the problem?
A2: Google acknowledged the problem and suspended the image generation feature, committing to work on improvements.
Q3: What is Google’s strategy for improving Gemini’s image generation feature?
A3: Google plans to carry out extensive testing to improve the feature, though it may take a while before it is reactivated.
Q4: What happens when a user currently tries to get Gemini to create a picture?
A4: At present, if a user tries to get Gemini to create a picture, the chatbot replies that it is working on improving its ability to generate images of people and that the feature is anticipated to return soon.