TL;DR

Security researchers at Pen Test Partners reported four vulnerabilities in Eurostar's public AI chatbot that could enable prompt injection, HTML injection and the leakage of system prompts. During responsible disclosure the researchers say Eurostar's head of security accused them of 'blackmail'; the operator patched some issues but it is unclear whether all problems were fixed.

What happened

Researchers at Pen Test Partners discovered four weaknesses in Eurostar's public AI chatbot and submitted a report via the operator's vulnerability disclosure program (VDP) on June 11. After no reply, the team followed up on June 18 and later contacted Eurostar's head of security on LinkedIn on July 7. The researchers say Eurostar had recently outsourced its VDP, launched a new disclosure page and retired the old one, which initially left no record of their report. Eurostar eventually located the original email and applied fixes to some of the flaws. During the LinkedIn exchange, the firm's managing partner says the security lead replied that "some might consider this to be blackmail." The Pen Test Partners post describes design issues in the chatbot: the front end forwards full chat history to the API but only runs guardrail checks on the latest message, enabling edited earlier messages to bypass safety checks. The team demonstrated prompt injection that revealed the system prompt and showed HTML injection risks; they say the backend also did not verify conversation and message IDs, creating a plausible path to stored cross-site scripting (XSS). The researchers say they do not know whether all vulnerabilities have been fully resolved. The Register contacted Eurostar for comment and reported no immediate response.

Why it matters

  • Prompt injection and HTML injection can expose internal prompts and enable attackers to present malicious links or code inside seemingly legitimate chatbot responses.
  • Failure to verify conversation and message IDs increases the risk of stored XSS, which can be used to hijack sessions or deliver phishing payloads to users.
  • Mismanagement of a vulnerability disclosure program risks losing or delaying reports, prolonging exposure to known vulnerabilities.
  • The incident shows the reputational and operational challenges operators face when consumer-facing chatbots process user data without robust security controls.

Key facts

  • Pen Test Partners found four flaws in Eurostar's public AI chatbot.
  • The initial vulnerability report was sent on June 11; a follow-up occurred on June 18.
  • On July 7 the researchers contacted Eurostar's head of security on LinkedIn; by July 31 Eurostar's records initially showed no trace of the report.
  • Eurostar had outsourced its VDP and launched a new disclosure page, retiring the old one; researchers say this may have caused reports to be lost.
  • Eurostar located the original email and patched some, but not necessarily all, of the reported issues.
  • A LinkedIn reply from Eurostar's security lead allegedly said: 'Some might consider this to be blackmail.'
  • Design flaws: the frontend sends full chat history to the API while running guardrail checks only on the latest message, allowing earlier messages to be tampered with and used for prompt injection.
  • Researchers demonstrated prompt injection that revealed the system prompt and showed the bot returning 'GPT-4' in an injected itinerary example.
  • The chatbot was also vulnerable to HTML injection, and the backend did not verify conversation and message IDs — a combination researchers say suggests a plausible path to stored/shared XSS.
  • The researchers say they do not know if Eurostar fully fixed all the security flaws; The Register contacted Eurostar and received no immediate response.

What to watch next

  • Whether Eurostar publishes a full remediation statement confirming all reported issues are fixed or provides technical details of the fixes.
  • Whether any vulnerability reports were permanently lost during Eurostar's VDP transition — not confirmed in the source.
  • Evidence of exploitation in the wild or reports of affected users — not confirmed in the source.
  • Any changes to Eurostar's vulnerability disclosure process or public communication practices following this incident.

Quick glossary

  • Prompt injection: A technique where an attacker manipulates text in a chat or prompt history to influence an AI model's output in unintended ways.
  • Guardrails: Checks and filters placed on AI inputs and outputs intended to prevent unsafe, disallowed, or harmful responses.
  • Stored XSS (cross-site scripting): A vulnerability where malicious code injected into a site is stored and later served to other users, causing their browsers to execute it.
  • Vulnerability disclosure program (VDP): A formal process and contact point through which security researchers report suspected bugs so organizations can investigate and remediate them.
  • System prompt: Internal instructions given to an AI model that shape its behavior and responses; exposing it can make attacks easier.

Reader FAQ

Did Eurostar fix the reported chatbot vulnerabilities?
The company fixed some issues, but the researchers say they do not know whether all reported vulnerabilities were fully resolved.

Were the researchers accused of blackmail?
According to Pen Test Partners, Eurostar's head of security replied that 'some might consider this to be blackmail' during a LinkedIn exchange.

What kinds of attacks were demonstrated?
Researchers showed prompt injection that revealed the system prompt and HTML injection that could enable phishing-style responses; they also noted conditions that make stored XSS plausible.

Were any reports lost when Eurostar changed its VDP?
The researchers say Eurostar retired its old disclosure page while launching a new one and initially had no record of their report, raising the question of lost disclosures — exact numbers are not confirmed in the source.

Has Eurostar publicly responded to these allegations?
The Register contacted Eurostar and did not receive an immediate response; further comment was not provided in the source.

SECURITY 6 Pen testers accused of 'blackmail' after reporting Eurostar chatbot flaws AI goes off the rails … because of shoddy guardrails Jessica Lyons Wed 24 Dec 2025 // 18:22 UTC Researchers at Pen…

Sources

Related posts

By

Leave a Reply

Your email address will not be published. Required fields are marked *