TL;DR

A journalist likens day-to-day work with Microsoft Copilot to old text-adventure games, arguing that unreliable outputs and shifting behaviours force users to keep relearning prompts. Examples include a spreadsheet request that yielded only a Python script, and responses that varied across Copilot variants and from day to day.

What happened

A columnist compares the experience of using AI chatbots this year to playing brittle 1980s text-adventure games, coining the shorthand 'PromptQuest' for the repeated cycle of guessing the right phrasing. He recounts asking Microsoft Copilot to locate online data and convert parts of it into a downloadable spreadsheet. Instead of delivering a finished file, Copilot returned a Python script that it said would write the spreadsheet, and in later attempts repeatedly reported the job as complete without producing any file. The writer also reports that identical prompts produced different outputs on different days, and that Microsoft offers multiple Copilot versions (for Office and a desktop app) that behave differently with the same inputs. He says the underlying model has been switched without any change to the UI, forcing users to relearn prompts that previously worked. In one instance he asked the assistant for a progress bar and got one, noting that the output resembled the interface text of an old text adventure.
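
For a sense of what Copilot handed back instead of a file, a minimal sketch of a spreadsheet-writing script of that kind might look like the following. This is illustrative only, not the author's actual output; the use of pandas (writing .xlsx via openpyxl), the filename, and the column names are all assumptions.

    # Illustrative sketch only -- not the script Copilot produced.
    # Requires: pip install pandas openpyxl
    import pandas as pd

    # Hypothetical rows standing in for the online data the author asked Copilot to gather
    rows = [
        {"item": "Example A", "value": 42},
        {"item": "Example B", "value": 17},
    ]

    # Write a downloadable spreadsheet to disk -- the step Copilot described but never delivered
    pd.DataFrame(rows).to_excel("output.xlsx", index=False)

Running a script like this locally does produce a spreadsheet, which is the author's point: the assistant shifted the final step back onto the user rather than delivering the requested file.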

Why it matters

  • Inconsistent outputs reduce predictability and add time to routine tasks that rely on repeatable results.
  • Hidden model updates can break established workflows if the interface gives no indication of change.
  • When users must learn a system's precise phrasing, the tool shifts effort back onto the person rather than increasing productivity.
  • Reliability and clear delivery (for example, producing downloadable files as requested) are essential for trust in workplace AI.

Key facts

  • The author likens the experience of prompting chatbots to old text-adventure games that required exact commands.
  • He asked Microsoft Copilot to gather online data and convert elements into a downloadable spreadsheet; the bot returned a Python script that it said would create the file.
  • The same prompt reportedly produced different results on different days during the author's tests.
  • Different variants of Copilot (Office vs desktop app) returned different outputs from the same prompt and source material.
  • The author says Copilot has switched models without changes to its UI, which affected prompt behaviour.
  • In repeated attempts to get the requested spreadsheet, Copilot said it had completed the job, but no file was ever delivered.
  • When asked for a progress bar, Copilot produced one; the writer noted its resemblance to text-adventure output.
  • The writer labels the overall interaction pattern 'PromptQuest' to describe the cycle of trial-and-error prompting.

What to watch next

  • Whether Microsoft will provide clearer signals in the UI when model versions change — not confirmed in the source
  • If Microsoft standardizes Copilot behaviour across Office and desktop variants to improve consistency — not confirmed in the source
  • Whether product changes will improve direct delivery of requested outputs (for example, generating downloadable files rather than scripts) — not confirmed in the source

Quick glossary

  • Text-adventure game: A game format from early computing where players interact with a fictional world through typed commands and receive text-only descriptions of outcomes.
  • Chatbot: A software application that generates text or speech responses to user inputs, often using machine learning models.
  • Prompt: The instruction or text a user provides to a chatbot or AI model to elicit a desired response.
  • Model update: A change to the underlying AI system that can alter how it interprets prompts and produces outputs.
  • Copilot: Microsoft's branded AI assistant product family that integrates with applications to assist users with tasks.

Reader FAQ

Did Copilot produce the requested spreadsheet in the author's test?
No — the author reports Copilot returned a Python script and later claimed completion without providing a downloadable spreadsheet.

Are the inconsistent results unique to a single Copilot variant?
The author says different Copilot versions (Office and desktop app) produced different outputs from the same prompt.

Has Microsoft acknowledged or fixed these issues?
Not confirmed in the source.

Is this behaviour limited to Microsoft Copilot or seen in other chatbots?
Not confirmed in the source.

Sources

  • 'PromptQuest' is the worst game of 2025. You play it when trying to make chatbots work