- Published on March 13, 2025
- In AI News
Gemini 2.0 Flash integrates multimodal input, reasoning, and natural language processing to generate images.
Google has announced the availability of native image output in Gemini 2.0 Flash for developer experimentation. Initially introduced to trusted testers in December, this feature is now accessible across all regions supported by Google AI Studio.
“Developers can now test this new capability using an experimental version of Gemini 2.0 Flash (gemini-2.0-flash-exp) in Google AI Studio and via the Gemini API,” Google said.
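For developers who want to try the endpoint, a minimal sketch of such a call might look like the following. It assumes the google-genai Python SDK, an API key exported as GEMINI_API_KEY, and the response_modalities configuration commonly used for image output; none of these details come from the announcement itself.

```python
# Minimal sketch, assuming the google-genai Python SDK (`pip install google-genai`)
# and an API key exported as GEMINI_API_KEY. Not official sample code.
import os
from io import BytesIO

from google import genai
from google.genai import types
from PIL import Image

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Request both text and an image from the experimental model in one call.
response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="Illustrate a three-line story about a fox learning to paint.",
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# The response interleaves text parts with inline image data.
for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)
    elif part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("illustration.png")
```

The same model string can also be selected directly in Google AI Studio for no-code experimentation.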
OpenAI announced a similar feature for GPT-4o last year, but the company hasn’t shipped it yet. Notably, Google isn’t using Imagen 3 to generate these images; the capability is fully native to Gemini.
According to Google, the model’s key capabilities include text and image generation, conversational image editing, and text rendering.
“Use Gemini 2.0 Flash to tell a story, and it will illustrate it with pictures while maintaining consistency in characters and settings,” the company explained. The model also supports interactive editing, allowing users to refine images through natural language dialogue.
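As a rough illustration of that editing loop (a hedged sketch, not sample code from Google), one way to express a follow-up refinement with the same assumed SDK is to pass the previously generated image back to the model alongside a natural-language instruction:

```python
# Hedged follow-up sketch: refine an earlier image through a natural-language
# instruction. Assumes the same google-genai SDK setup as the previous snippet;
# the file names and prompt are illustrative only.
import os
from io import BytesIO

from google import genai
from google.genai import types
from PIL import Image

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
previous = Image.open("illustration.png")  # image produced by the earlier call

edit = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents=[previous, "Keep the same fox and setting, but make it nighttime."],
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

for part in edit.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("illustration_night.png")
```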
Another feature is its ability to use world knowledge for realistic image generation. Google claims this makes it suitable for applications such as recipe illustrations. Moreover, the model offers improved text rendering, addressing common issues found in other image-generation tools.
"How is native image gen better than current models?" pic.twitter.com/KOyaGr0VgM
Internal benchmarks indicate that Gemini 2.0 Flash outperforms leading models in rendering long text sequences, making it useful for advertisements and social media content.
Google has invited developers to experiment with the model and provide feedback. “We’re eager to see what developers create with native image output,” the company said. Feedback from this phase will contribute to finalising a production-ready version.
Google recently also launched Gemma 3, the next iteration in its Gemma family of open-weight models and the successor to Gemma 2, which was released last year.
The family of small models comes in four parameter sizes: 1B, 4B, 12B and 27B. It supports a longer context window of 128k tokens, can analyse videos, images, and text, works in 35 languages out of the box, and provides pre-trained support for 140 languages.
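For those who would rather experiment locally, a hedged sketch of loading the smallest instruction-tuned Gemma 3 checkpoint through the Hugging Face transformers pipeline is shown below; the model ID, required library version, and gated-access step are assumptions based on how Gemma models are usually distributed, not details from Google’s release.

```python
# Hedged sketch: run the 1B instruction-tuned Gemma 3 variant locally.
# Assumes a transformers release with Gemma 3 support, acceptance of the
# model licence on Hugging Face, and enough memory for the 1B checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",  # smallest, text-only member of the family
)

messages = [
    {"role": "user", "content": "In two sentences, what does a 128k-token context window allow?"}
]
output = generator(messages, max_new_tokens=128)

# With chat-style input, the pipeline returns the full conversation;
# the last message holds the model's reply.
print(output[0]["generated_text"][-1]["content"])
```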
Siddharth Jindal
Siddharth is a media graduate who loves to explore tech through journalism and putting forward ideas worth pondering about in the era of artificial intelligence.