TL;DR

A reporter tried to reproduce Google’s Gemini ad by feeding the AI photos of her child’s favorite stuffed deer. Gemini generated replacement-shopping suggestions, images and short videos, but it required lots of prompting, made mistakes, and produced results that raised privacy and ethical concerns.

What happened

The author attempted to replicate Google’s Gemini commercial using three photos of her son’s beloved plush, “Buddy.” She first asked Gemini to “find this stuffed animal to buy ASAP.” The model returned a few candidate matches but produced an extended, roughly 1,800-word internal monologue that flip-flopped between identifying the toy as a dog, rabbit or fawn before ultimately suggesting it might be a discontinued item sold on sites like eBay. Next, she used other photos and prompts to generate staged images of the toy on a plane, at the Grand Canyon and in front of landmarks; these came out looking plausible but required careful prompts and source images with clear angles. Gemini also generated short videos, which take minutes to create and are limited to three per day on Gemini Pro, but it refused to make videos based on any image that included the child, likely due to safety guardrails. Hearing an AI voice address her son directly made the author uncomfortable, and she declined to show him the results.

Why it matters

  • AI tools can mimic ad scenarios but demand careful prompting and good source photos to produce believable outputs.
  • Systems may still misidentify objects and produce long, unreliable internal reasoning when asked to “search” by image.
  • Safety guardrails can block potentially harmful deepfakes of children, but ethical questions remain about using AI to console or deceive kids.
  • Commercial presentations may omit the work and iteration required to achieve the polished results seen in ads.

Key facts

  • The experiment used three different photos of the stuffed animal as input for Gemini.
  • Gemini produced an approximately 1,800-word chain-of-thought when asked to find a replacement.
  • The AI misidentified the toy at times (calling it a puppy or rabbit) before suggesting it might be a discontinued Mary Meyer Putty fawn.
  • The author’s own web searches took about 20 minutes and led to similar conclusions about the toy’s origin.
  • Gemini generated plausible staged images (airplane, Grand Canyon, Eiffel Tower) after repeated prompting.
  • Video generation took minutes per clip and Gemini Pro users are limited to three generated videos per day.
  • Gemini would not create videos from any image that included the child, which the author attributes to guardrails.
  • The author chose not to share the AI-generated clips with her child, citing discomfort with an AI character addressing him by name.
  • All images and videos included in the article were produced using Google Gemini.

What to watch next

  • Not confirmed in the source: whether Google will change how it presents Gemini results in ads to show the prompting work behind them.
  • Not confirmed in the source: whether companies will add more explicit controls for parents about AI-generated content featuring children or child-related props.
  • Not confirmed in the source: potential regulatory or policy responses to ads that imply seamless, instant AI capabilities.

Quick glossary

  • Gemini: Google’s multimodal AI platform that can generate text, images and short videos from prompts and image inputs.
  • Prompting: The act of giving specific instructions or examples to an AI model to guide the output it generates.
  • Guardrails: Safety measures and restrictions built into AI systems to prevent harmful or sensitive outputs, such as deepfakes of minors.
  • Reverse image search: A technique that uses an image to find visually similar images or related information on the web.
  • Deepfake: Synthetic media in which a person in an existing image or video is replaced with someone else's likeness, often created with AI.

Reader FAQ

Could Gemini find an exact replacement toy?
Gemini offered candidate matches, ultimately suggested the toy might be discontinued, and recommended checking marketplaces like eBay; it did not produce a guaranteed exact replacement.

Did Gemini produce the same polished results as the commercial without effort?
No. The author needed repeated, careful prompting and the quality of source images affected outcomes.

Would Gemini create videos using photos of the child holding the toy?
No. The model refused to generate videos from images that included the child, which the author attributes to safety guardrails.

Did the author show the AI-generated toy videos to her son?
No. She decided not to show the clips, citing discomfort with the toy speaking directly to her child.
