TL;DR
Google is testing a Gemini app beta called Personal Intelligence that links Gmail, Photos, Search, and YouTube history to deliver proactive, context-aware responses. The feature is off by default, limited to U.S. AI Pro and AI Ultra subscribers for now, and comes with privacy guardrails and limits on how user content is used for model training.
What happened
Google introduced a beta capability in the Gemini app named Personal Intelligence that can draw on data across a user's Google apps (starting with Gmail, Google Photos, Search, and YouTube watch history) to generate more tailored, proactive answers. Whereas Gemini could previously pull data from those services only when asked, the new beta can reason across multiple sources to infer context and surface relevant details without being told which app to check. Google says the experience is disabled by default and is used only when Gemini judges it will be helpful. The company provided illustrative scenarios, including identifying a vehicle's tire size from trip photos, extracting a license plate from an image, and recommending travel options based on past emails and photos. Google also said it limits proactive handling of sensitive topics, that user content in these apps isn't used directly to train the model, and that the beta is initially available to certain U.S. subscribers, with plans to broaden access.
Why it matters
- Signals a move toward assistants that combine data across services to offer personalized, proactive help rather than only answering direct queries.
- Gives users more choice: Personal Intelligence is opt-in and off by default, addressing some privacy concerns about cross-product data use.
- Demonstrates Google’s emphasis on contextual reasoning across varied data types (email, photos, video, search) rather than simple retrieval.
- A paid-tier-first rollout may affect how quickly the feature reaches mainstream users and what privacy and accuracy feedback Google collects early on.
Key facts
- Feature name: Personal Intelligence; introduced as a beta in the Gemini app.
- Initial data sources: Gmail, Google Photos, Search, and YouTube watch history.
- Default setting: Personal Intelligence is off by default; users must opt in to connect apps.
- Behavior: Gemini will use Personal Intelligence only when it determines doing so will be helpful.
- Example use cases provided by Google include identifying tire size from photos, pulling a license plate from an image, and tailoring trip suggestions using past emails and photos.
- Sensitive data: Google says there are guardrails so Gemini avoids making proactive assumptions about sensitive topics, though it will discuss such data if the user asks (a simplified sketch of this gating logic follows this list).
- Model training: Google states Gemini does not train directly on users’ Gmail inboxes or Google Photos libraries; it trains on prompts and the model’s responses.
- Availability: Rolling out to Google AI Pro and AI Ultra subscribers in the U.S. with plans to expand to more countries and the free tier.
- Google published example prompts that users can try, covering planning and content recommendations based on connected data.
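Taken together, the off-by-default setting, the "only when helpful" behavior, and the sensitive-topic guardrails amount to a gating decision before personal data ever informs a response. The Python sketch below is a toy illustration of that decision flow under stated assumptions; it is not Google's implementation, and every name in it (PersonalContext, SENSITIVE_TOPICS, may_use_personal_data, the example topics) is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalContext:
    """Hypothetical stand-in for a user's opt-in state and connected apps."""
    connected_apps: set = field(default_factory=set)   # e.g. {"gmail", "photos"}
    opted_in: bool = False                              # Personal Intelligence is off by default

# Hypothetical examples of topics the guardrails would treat as sensitive.
SENSITIVE_TOPICS = {"health", "finances", "religion"}

def may_use_personal_data(ctx: PersonalContext, topic: str,
                          user_asked_directly: bool, judged_helpful: bool) -> bool:
    """Decide whether personal data may inform this response (toy logic only)."""
    if not ctx.opted_in or not ctx.connected_apps:
        return False                     # opt-in only: no connected apps, no personalization
    if topic in SENSITIVE_TOPICS and not user_asked_directly:
        return False                     # no proactive inferences about sensitive topics
    return judged_helpful                # otherwise, use personal data only when judged helpful

if __name__ == "__main__":
    ctx = PersonalContext(connected_apps={"gmail", "photos"}, opted_in=True)
    # Proactive trip suggestion: allowed in this toy model.
    print(may_use_personal_data(ctx, "travel", user_asked_directly=False, judged_helpful=True))
    # Proactive inference about health: blocked unless the user asks directly.
    print(may_use_personal_data(ctx, "health", user_asked_directly=False, judged_helpful=True))
```

In this toy model, a proactive travel suggestion passes the gate while a proactive health inference does not, mirroring the behavior Google describes; the real system's criteria and categories are not public.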
What to watch next
- Timeline and scope for expanding Personal Intelligence beyond U.S. AI Pro and AI Ultra subscribers to other countries and Gemini's free tier (Google says it plans to expand).
- How the guardrails for sensitive topics are implemented in practice and whether they prevent unwanted inferences or disclosures (not confirmed in the source).
- Whether Google will change Personal Intelligence's off-by-default, opt-in setting or alter user controls based on early feedback (not confirmed in the source).
Quick glossary
- Gemini app: Google’s generative AI assistant application that handles conversational queries and can integrate with other Google services.
- Personal Intelligence: A Gemini beta feature that links multiple Google products to let the assistant reason across a user’s emails, photos, search and watch history for tailored responses.
- Proactive response: An assistant action that surfaces context-relevant information or suggestions without the user specifying where the assistant should look.
- Guardrails: Policies, filters or constraints designed to limit an AI system’s behavior, especially around sensitive topics or privacy-sensitive inferences.
- Model training: The process by which a machine learning model learns patterns from data; companies may distinguish between training on user content and using content only to generate responses.
Reader FAQ
Is Personal Intelligence on by default?
No. Google says Personal Intelligence is off by default and users must choose to connect their apps.
Will Google use my Gmail or Photos to train the model?
Google states Gemini does not train directly on users’ Gmail inboxes or Google Photos libraries; it trains on prompts and the model’s responses.
Who can access the feature now?
Personal Intelligence is rolling out to Google AI Pro and AI Ultra subscribers in the U.S., with plans to expand availability.
Will Gemini make inferences about sensitive topics proactively?
Google says it has guardrails so Gemini will avoid making proactive assumptions about sensitive data; however, it will discuss such data if the user asks.

Sources
- Gemini’s new beta feature provides proactive responses based on your photos, emails, and more
- Gemini rolling out 'Personal Intelligence' that uses Gmail …
- Google's Gemini AI will use what it knows about you from …
- Gmail is entering the Gemini era