TL;DR

A recent Snowflake software release introduced a backwards-incompatible database schema change that produced version mismatch errors and widespread operational failures. The incident affected 10 of the vendor's 23 regions, interrupted queries and file ingestion, and prompted Snowflake to promise a root cause analysis within days.

What happened

Snowflake deployed an update that, according to the company’s incident report, introduced a backwards-incompatible change to its database schema. Because earlier release packages still referenced fields that no longer matched the updated schema, the change produced version mismatch errors and caused some operations to fail or take much longer than normal.

The problem was first logged as SQL execution errors at 02:55 UTC on Tuesday; Snowflake said it identified the issue about 90 minutes later and that systems were restored by roughly 05:00 UTC. The outage affected 10 of Snowflake’s 23 global regions, including US Azure (Virginia) and AWS (Oregon), and data centers in Ireland, Zurich, London, Sweden, Mumbai, Singapore and Mexico. The company also acknowledged that users saw error messages for a period described as 13 hours. Snowflake said it will publish a root cause analysis within five days of closing the incident and otherwise declined additional comment.

Why it matters

  • Customers experienced interrupted data queries and failed file ingestion, directly impacting analytics and downstream processes.
  • The incident highlights the risk that backwards-incompatible changes in distributed data platforms can have broad, cross-region effects.
  • Repeated service incidents raise questions about release testing, rollout practices and operational resilience for cloud data infrastructure vendors.
  • Widespread outages among major data platforms can affect dependent applications and enterprises, increasing operational and compliance exposure.

Key facts

  • Snowflake says the recent release introduced a backwards-incompatible database schema update.
  • Earlier release packages referenced fields that no longer matched the updated schema, producing version mismatch errors that caused failures or slow operations.
  • Ten of Snowflake’s 23 global regions were affected, including US Azure (Virginia) and AWS (Oregon), plus Ireland, Zurich, London, Sweden, Mumbai, Singapore and Mexico.
  • The issue was first reported at 02:55 UTC; Snowflake stated it identified the problem about 90 minutes later and reported systems were back by roughly 05:00 UTC.
  • The company said some users received error messages for a period described in the report as 13 hours.
  • Snowflake committed to publishing a root cause analysis within five days of closing the incident.
  • This was Snowflake’s second incident within a week, following a December 10 database infrastructure issue that degraded performance for users in the AWS Oregon datacenter.
  • The Register reported a user complaint on Snowflake’s Reddit page describing frustration with the rollback time.
  • Snowflake told The Register it had nothing additional to share at the time of reporting.

What to watch next

  • Snowflake's promised root cause analysis, due within five days after the incident is closed (confirmed in the source).
  • Whether Snowflake will announce changes to its release or rollback procedures — not confirmed in the source.
  • Any formal customer remediation, SLA crediting or contractual notices following the incident — not confirmed in the source.

Quick glossary

  • Backwards-incompatible change: A modification to software or a schema that breaks compatibility with previous versions, causing older clients or packages to fail or behave incorrectly.
  • Database schema: The structure that defines how data is organized in a database, including tables, fields and relationships.
  • Version mismatch error: An error that occurs when components or packages expect different versions of data structures or APIs and cannot interoperate correctly.
  • Cloud region: A geographic area where a cloud provider operates one or more data centers; customers are often routed to a region for data residency, latency or redundancy reasons.
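To make the glossary concrete, here is a minimal, hypothetical sketch (not Snowflake's actual code or schema; all field and function names are invented for illustration) of how a backwards-incompatible schema change can trip an older release package into a version mismatch error:

```python
# Hypothetical example: schema v2 renamed the v1 field "region" to
# "region_id" -- a backwards-incompatible change, since v1 readers
# still look for the old field name.
record_v2 = {"schema_version": 2, "query_id": "abc123", "region_id": "aws-us-west-2"}

def old_client_read(record):
    """An older release package that still expects v1 records."""
    if record["schema_version"] != 1:
        # The component detects a schema it was not built for.
        raise RuntimeError(
            f"version mismatch: expected schema v1, got v{record['schema_version']}"
        )
    return record["region"]  # v1 field name; absent from v2 records

try:
    old_client_read(record_v2)
except RuntimeError as err:
    print(err)  # version mismatch: expected schema v1, got v2
```

In a distributed platform, many components at mixed release versions read the same shared schema, which is why a single incompatible change can surface as failures across multiple regions at once.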

Reader FAQ

How long did the outage last?
The incident was first reported at 02:55 UTC, and Snowflake said systems were back around 05:00 UTC; the company's report also described users receiving error messages for a period of 13 hours.

What caused the outage?
Snowflake reported a recently released backwards-incompatible database schema update that led to version mismatch errors.

Which regions were affected?
Ten of Snowflake’s 23 regions were impacted, including US Azure (Virginia), AWS (Oregon), Ireland, Zurich, London, Sweden, Mumbai, Singapore and Mexico.

Will Snowflake publish a root cause analysis?
Yes — the company said it will release a root cause analysis within five days of closing the incident.

Was customer data lost or corrupted?
Not confirmed in the source.
