Google's Gemini models add native video understanding



Mar 16, 2025

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.

Google has integrated native video understanding into its Gemini models, letting users analyze YouTube content through Google AI Studio. Simply paste a YouTube link into your prompt; the system transcribes the audio and samples video frames at one-second intervals. You can then reference specific timestamps and request summaries, translations, or visual descriptions.

Currently in preview, the feature allows processing up to 8 hours of video per day and is limited to one public video per request. Gemini Pro handles videos up to two hours long, while Gemini Flash handles videos up to one hour. The update follows the rollout of native image generation in Gemini.

Video: via Logan Kilpatrick
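For developers, the same capability is also exposed through the Gemini API, where a YouTube link is passed as file data alongside the prompt. The sketch below assumes the google-genai Python SDK; the model name, API key placeholder, video URL, and prompt text are illustrative assumptions, not details from Google's announcement.

```python
# Minimal sketch: asking Gemini about a public YouTube video via the
# google-genai Python SDK (pip install google-genai). Model name, video
# URL, and prompt are placeholders, not values from the announcement.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # or set GOOGLE_API_KEY in the environment

response = client.models.generate_content(
    model="gemini-2.0-flash",  # assumed: any video-capable Gemini model
    contents=types.Content(
        parts=[
            # The YouTube link is supplied as file_data, like an uploaded file.
            types.Part(
                file_data=types.FileData(
                    file_uri="https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder
                )
            ),
            # Timestamps can be referenced in the prompt, e.g. "What happens at 01:30?"
            types.Part(text="Summarize this video and list its key moments."),
        ]
    ),
)

print(response.text)
```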
