Google has officially begun rolling out its highly anticipated screen-sharing feature for Gemini Live, codenamed “Project Astra,” to select Android users. The feature, which was first teased during the 2025 Mobile World Congress (MWC), allows users to share their phone’s screen with Gemini Live and engage in insightful, real-time conversations with the AI about what’s currently on their display. Let’s dive into what this breakthrough means for tech enthusiasts and Gemini subscribers alike.
Google’s Bold Move with Gemini Live Screen Sharing
At MWC 2025, Google confirmed that it was working on an innovative screen and video share capability for its Gemini Live AI. Known internally as “Project Astra,” this feature has been generating considerable buzz ever since. Now, Gemini Live users, specifically those with a Gemini Advanced subscription, are beginning to notice the rollout in the wild.
A Reddit user who owns a Xiaomi phone and has an active Gemini Advanced subscription shared a video showcasing the functionality. The new tool lets users share their phone’s screen with Gemini Live and ask questions about what’s on display—essentially offering an interactive AI assistant that understands the context of your activities.
A New Level of Interaction: How It Works
Integrating screen sharing into Gemini Live goes beyond the traditional AI assistant experience. With this new feature, you can initiate a conversation with Gemini by simply opening an app or website, and the AI will respond based on what it sees on your screen.
For example, suppose you open the Chrome browser and visit a Wikipedia page detailing the Gross Domestic Product (GDP). You can then ask Gemini to summarize the page or explain key economic terms in simple language. What’s even more impressive is that Gemini doesn’t just respond with text—it can offer tailored interactions, such as reading the content aloud, rephrasing it in another language, or even turning it into a melody.
This capability allows for a truly immersive experience, where Gemini serves not just as a tool for answering questions but as an intelligent assistant capable of understanding the full context of what you’re doing on your phone.
Understanding Context: Why Gemini’s Screen Sharing Matters
Gemini’s ability to “see” and comprehend the context of what you’re doing on your phone is a game-changer. Unlike other AI assistants that require explicit queries or command inputs, Gemini’s integration with your screen means that you don’t have to manually type out every question. The AI already knows what you’re looking at, making for a more seamless and intuitive interaction.
This functionality brings a new layer of convenience, particularly for users who may need quick explanations while they’re browsing or working on their phones. Imagine reading an article about a complex topic and being able to immediately ask Gemini to break it down into simpler terms, all without leaving the page.
What’s Included in the Rollout
Currently, the feature is available only to Gemini Advanced subscribers, on plans starting at $19.99 per month. The rollout is gradual, with a select group of users gaining access first, though Google has said that more users will be able to experience it soon.
Integrating screen sharing is not just about enhancing AI-powered conversations—it’s about providing a more context-aware, personalized interaction. Whether you’re reading through a research paper, trying to understand a complicated topic, or simply need help navigating a website, Gemini Live with screen sharing can serve as your go-to AI assistant.
What Does This Mean for the Future of AI?
The rollout of Gemini Live’s screen-sharing feature signifies a major step forward in the evolution of AI-assisted technology. By combining real-time visual context with conversational AI, Google is breaking new ground in how we interact with technology. The possibilities for users range from educational support to content creation to simple everyday tasks.
Screen sharing, in short, turns Gemini from a question-answering tool into an assistant that adapts to whatever context is on your display.
The future of AI is evolving quickly, and Google’s Gemini Live with its new screen-sharing feature is proof of that. While it’s currently exclusive to Gemini Advanced subscribers, the potential for broader access in the future could revolutionize how we use AI in everyday tasks. Whether you’re a student needing quick explanations or a professional navigating complex data, Gemini is paving the way for a smarter, more intuitive AI experience.
This development underscores how AI can move beyond simple tasks and engage with users in a more dynamic, context-aware manner. As this feature becomes more widely available, it could set the stage for even more advanced AI functionalities that will continue to change the way we interact with technology. Keep an eye on your Android device, as Gemini Live is bringing the future to your fingertips sooner than you think.