TL;DR
A small experiment mapped zodiac-style personality prompts onto 12 AI agents that all used the same LLM (Gemini 3 Flash Preview) and asked each agent the same 10 yes/no dilemmas. Results showed both clear majorities on several items and sharp splits on others, with some zodiac archetypes skewing consistently toward risk-taking or caution.
What happened
The project created 12 AI agents, each driven by a personality description inspired by a zodiac sign, and ran them against a fixed set of 10 moral and practical dilemmas. To isolate the effect of persona, every agent queried the same underlying model (Gemini 3 Flash Preview), so the personality prompt was the only variable across agents. Responses were collected as simple YES/NO votes. Several dilemmas produced strong majorities — for example, every agent rejected an immediate-marriage ultimatum, while most favored relocating for a dream job and releasing potentially disruptive technology. Other questions split the panel evenly, such as whether to reveal a friend’s infidelity or to risk life savings on a 50/50 startup bet. The repository accompanying the experiment includes the personality prompt files, orchestration code, a test runner, and raw test results.
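The setup described above is essentially a nested loop: each persona prompt is paired with each dilemma, sent to the same model, and reduced to a YES/NO vote. A minimal sketch in JavaScript — the persona and dilemma texts are invented placeholders, and `askModel` is a synchronous stand-in for the real (asynchronous) Gemini API call, not the repository's actual code:

```javascript
// Hypothetical personas and dilemmas; the real project loads 12 zodiac
// prompt files and 10 dilemmas from the repository.
const personas = {
  aries: "You are bold, impulsive, and action-oriented.",
  taurus: "You are patient, cautious, and value stability.",
};
const dilemmas = [
  "Would you relocate for a dream job? Answer only YES or NO.",
];

// Stand-in for the real LLM call; returns a canned answer so the
// loop runs offline. In practice this would be an async API request.
function askModel(systemPrompt, question) {
  return systemPrompt.includes("cautious") ? "NO" : "YES";
}

// Normalize free-form model output to a strict YES/NO vote.
function parseVote(text) {
  return /\bYES\b/i.test(text) ? "YES" : "NO";
}

// Run every persona against every dilemma and collect the votes.
function runExperiment() {
  const results = [];
  for (const [sign, prompt] of Object.entries(personas)) {
    for (const dilemma of dilemmas) {
      const raw = askModel(prompt, dilemma);
      results.push({ sign, dilemma, vote: parseVote(raw) });
    }
  }
  return results;
}
```

Keeping the model fixed and varying only the system prompt is what lets the experiment attribute any divergence in votes to the persona framing alone.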
Why it matters
- Prompted personality frames can steer LLM outputs even when the core model is unchanged, which matters for agent design.
- Simple archetypal prompts produce consistent group-level patterns on some dilemmas but leave others contested, illustrating limitations of single-shot persona framing.
- The public repository and raw data allow others to reproduce or extend the experiment rather than relying on anecdote.
- Findings suggest caution when using human stereotypes as control signals in deployed agents; outcomes may reflect the frame more than robust reasoning.
Key facts
- Twelve agents were each assigned a zodiac-style personality prompt.
- Every agent used the same language model: Gemini 3 Flash Preview.
- Agents answered the same set of 10 yes/no dilemmas.
- Unanimous result: all 12 agents rejected marrying under an ultimatum (0/12 YES).
- Strong majorities favored taking a distant dream job (10/12 YES) and releasing a technology that could help millions despite job losses (10/12 YES).
- Ethics (telling a partner about infidelity) and financial risk (investing life savings in a 50/50 startup) each produced 50/50 splits (6/12 YES, 6/12 NO).
- The panel heavily rejected a dubious stranger investment opportunity (2/12 YES, 10/12 NO).
- Pattern analysis reported Sagittarius and Aquarius as the most action-oriented (9/10 YES), while Cancer and Taurus skewed most cautious (9/10 NO); Capricorn also leaned cautious (8/10 NO).
- Project code, personality prompts, and a test-results.json file are included in the GitHub repository.
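The per-sign patterns listed above (e.g. Sagittarius at 9/10 YES, Taurus at 9/10 NO) are simple tallies over the raw votes. A sketch of how one might compute them from a flat results array; the record shape (`sign`, `vote`) is an assumption for illustration, not the actual schema of the repository's test-results.json:

```javascript
// Count YES votes per sign from a flat list of {sign, vote} records.
// Field names are assumed; the repo's test-results.json may differ.
function yesCountsBySign(results) {
  const counts = {};
  for (const { sign, vote } of results) {
    counts[sign] = (counts[sign] ?? 0) + (vote === "YES" ? 1 : 0);
  }
  return counts;
}

// Invented sample mimicking the reported action-oriented vs. cautious
// split: 9/10 YES for one sign, 1/10 YES for the other.
const sample = [
  ...Array(9).fill({ sign: "sagittarius", vote: "YES" }),
  { sign: "sagittarius", vote: "NO" },
  { sign: "taurus", vote: "YES" },
  ...Array(9).fill({ sign: "taurus", vote: "NO" }),
];
```

With the raw data published, anyone can rerun this kind of tally and check the reported pattern analysis directly.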
What to watch next
- Whether similar persona framings produce the same patterns with other LLMs — not confirmed in the source
- How altering prompt wording or personality granularity changes agent votes — not confirmed in the source
- Community reproductions or forks that extend the dataset or add statistical analysis — not confirmed in the source
Quick glossary
- LLM: Large language model — a neural network trained to generate and understand text at scale.
- Prompt: Text supplied to an LLM to specify a task, instruction, or persona that guides the model’s responses.
- Agent: In this context, an LLM instance paired with a persona prompt that produces behavior or answers to queries.
- Reproducibility: The ability for others to repeat an experiment and obtain comparable results using the same code and data.
- Archetype: A widely recognized pattern of traits or behaviors used as a shorthand for personality or decision tendencies.
Reader FAQ
Which language model was used in the experiment?
The experiment used Gemini 3 Flash Preview for all agents.
Did the author endorse belief in astrology?
The README states the zodiac framework was chosen for familiarity and notes that the author does not claim any belief in astrology.
Are the raw agent responses available?
Yes — the repository includes a test-results.json file with the raw data.
Can these results be generalized to other models or real human behavior?
Not confirmed in the source.
How can I run the experiment locally?
The README shows an npm-based workflow: install dependencies, add a GEMINI_API_KEY to .env, then run npm start; exact steps are in the project README.
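Based on the README workflow summarized above, the steps look roughly like this; the clone step is omitted since the source excerpt does not give the repository URL, and the exact commands may differ from the project README:

```shell
# From inside a local clone of the repository:
npm install                              # install dependencies
echo "GEMINI_API_KEY=your-key" > .env    # add your Gemini API key
npm start                                # run the dilemma suite
```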
Sources
- Show HN: What if AI agents had Zodiac personalities?
- What If AI Had Zodiac Personalities? | by Tolga Uzmanoglu
- Super-intelligence or Superstition? Exploring …
- How might the use of generative AI for astrological …