Zero-Downtime Migrations
Schema migrations are the most consequential operations in a production database's lifecycle. A column rename, a new NOT NULL constraint, or an index addition can cause locking, application errors, or data loss if not sequenced correctly. TalkingSchema's AI copilot generates production-grade zero-downtime migration plans with explicit rollback scripts, phase-by-phase deployment guidance, and inline commentary explaining the safety rationale for every step.
The Expand-Contract Pattern
The industry-standard approach to zero-downtime schema migrations, popularized by Martin Fowler and formalized by teams at Facebook, GitHub, and Stripe, is the expand-contract pattern:
Phase 1: EXPAND
- Add new structures alongside the old ones.
- Both old and new application code can run against this schema.

Phase 2: MIGRATE
- Deploy new application code that writes to both old and new structures.
- Backfill historical data into the new structures.
- Verify completeness.

Phase 3: CONTRACT
- Remove the old structures.
- Only new application code runs at this point.
- Deploy in a separate release from Phase 1.
TalkingSchema generates all three phases as separate, independently-deployable migration files.
Example: Renaming a Column with Zero Downtime
Scenario: Rename shipments.co2_kg to shipments.emissions_kg in the GSSC schema while the production application keeps running.
```sql
-- ====================================================
-- Phase 1: EXPAND
-- File: V008A__expand_rename_emissions_column.sql
-- Deploy: With or before the application code update
-- ====================================================
BEGIN;

-- Add the new column alongside the old one
ALTER TABLE shipments
    ADD COLUMN emissions_kg DECIMAL(10, 3);

-- Add a trigger to dual-write: keep both columns in sync.
-- This allows old code (using co2_kg) and new code (using emissions_kg)
-- to run simultaneously during the transition window.
CREATE OR REPLACE FUNCTION sync_emissions_columns()
RETURNS TRIGGER AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        IF NEW.emissions_kg IS NOT NULL THEN
            NEW.co2_kg := NEW.emissions_kg;
        ELSIF NEW.co2_kg IS NOT NULL THEN
            NEW.emissions_kg := NEW.co2_kg;
        END IF;
    ELSIF TG_OP = 'UPDATE' THEN
        -- Copy whichever column the statement actually changed, so a
        -- stale value in the other column cannot overwrite a fresh write
        IF NEW.emissions_kg IS DISTINCT FROM OLD.emissions_kg THEN
            NEW.co2_kg := NEW.emissions_kg;
        ELSIF NEW.co2_kg IS DISTINCT FROM OLD.co2_kg THEN
            NEW.emissions_kg := NEW.co2_kg;
        END IF;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_sync_emissions
    BEFORE INSERT OR UPDATE ON shipments
    FOR EACH ROW EXECUTE FUNCTION sync_emissions_columns();

COMMIT;
```
```sql
-- ====================================================
-- Phase 2: BACKFILL
-- File: V008B__backfill_emissions_kg.sql
-- Deploy: After Phase 1 is confirmed stable
-- Run in batches for large tables
-- ====================================================

-- Batch backfill to avoid long-running transactions
DO $$
DECLARE
    batch_size INT := 10000;
    last_id UUID := '00000000-0000-0000-0000-000000000000';
    processed INT;
BEGIN
    LOOP
        WITH batch AS (
            SELECT shipment_id FROM shipments
            WHERE shipment_id > last_id
              AND emissions_kg IS NULL
              AND co2_kg IS NOT NULL
            ORDER BY shipment_id
            LIMIT batch_size
        ),
        updated AS (
            UPDATE shipments s
            SET emissions_kg = s.co2_kg
            FROM batch b
            WHERE s.shipment_id = b.shipment_id
            RETURNING s.shipment_id
        )
        -- Advance the cursor using only the rows this batch touched;
        -- taking MAX over the whole table would skip unbackfilled rows
        -- once new application code starts inserting emissions_kg
        SELECT COUNT(*), MAX(shipment_id)
        INTO processed, last_id
        FROM updated;

        EXIT WHEN processed = 0;
        COMMIT;  -- release locks between batches; requires PostgreSQL 11+
                 -- and that this script is not wrapped in an outer transaction
        PERFORM pg_sleep(0.1); -- brief pause between batches
    END LOOP;
END $$;

-- Verify: check for NULLs before proceeding to Phase 3
-- SELECT COUNT(*) FROM shipments WHERE co2_kg IS NOT NULL AND emissions_kg IS NULL;
-- Expected result: 0
```
```sql
-- ====================================================
-- Phase 3: CONTRACT
-- File: V008C__contract_drop_co2_kg.sql
-- Deploy: Only after ALL application instances use emissions_kg
-- ====================================================
BEGIN;
DROP TRIGGER IF EXISTS trg_sync_emissions ON shipments;
DROP FUNCTION IF EXISTS sync_emissions_columns();
ALTER TABLE shipments DROP COLUMN co2_kg;
COMMIT;

-- ====================================================
-- ROLLBACK SCRIPTS
-- ====================================================

-- Rollback Phase 1 (if Phase 2 not yet deployed):
-- BEGIN;
-- DROP TRIGGER IF EXISTS trg_sync_emissions ON shipments;
-- DROP FUNCTION IF EXISTS sync_emissions_columns();
-- ALTER TABLE shipments DROP COLUMN IF EXISTS emissions_kg;
-- COMMIT;

-- Rollback Phase 3 (emergency re-add of co2_kg):
-- Note: if old application code will write to co2_kg again, also
-- recreate the dual-write trigger from Phase 1.
-- BEGIN;
-- ALTER TABLE shipments ADD COLUMN co2_kg DECIMAL(10, 3);
-- UPDATE shipments SET co2_kg = emissions_kg WHERE emissions_kg IS NOT NULL;
-- COMMIT;
```
Lock-Safe Index Creation
Adding indexes to large tables is a common source of production incidents: a plain CREATE INDEX blocks writes to the table for the duration of the build. TalkingSchema instead generates migrations using CREATE INDEX CONCURRENTLY, which lets reads and writes continue while the index is built:
```sql
-- Safe index addition on a large table
-- Cannot be run inside a transaction block
CREATE INDEX CONCURRENTLY IF NOT EXISTS
    idx_shipments_po_arrived
    ON shipments(po_id, arrived_at DESC)
    WHERE arrived_at IS NOT NULL;

-- Verify the index is valid before relying on it. A concurrent build
-- that fails partway leaves an INVALID index behind; drop it and retry.
-- SELECT indexrelid::regclass AS index_name, indisvalid
-- FROM pg_index
-- WHERE indexrelid = 'idx_shipments_po_arrived'::regclass;

-- Rollback:
-- DROP INDEX CONCURRENTLY IF EXISTS idx_shipments_po_arrived;
```
Common Migration Scenarios
| Change | Pattern | Risk |
|---|---|---|
| Add nullable column | Single ALTER TABLE | Low — no lock |
| Add NOT NULL column | Expand (add nullable) → backfill → add constraint | Medium |
| Rename column | Expand (dual-write trigger) → backfill → contract | Medium |
| Change column type (compatible) | Cast with USING clause or expand-contract | Medium–High |
| Drop column | Contract phase only — after app no longer reads it | Medium |
| Add index | CREATE INDEX CONCURRENTLY | Low — no table lock |
| Add unique constraint | Add unique index concurrently → promote to constraint | Medium |
| Add foreign key | Add constraint as NOT VALID, then VALIDATE CONSTRAINT | Medium |
| Drop table | Contract phase only — after app fully decoupled | High |
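As one illustration of the table above, the "add unique constraint" row can be sketched as follows. This is a sketch only; tracking_number is a hypothetical column used for illustration, not part of the GSSC schema described here.

```sql
-- Step 1: build the unique index without blocking writes.
-- The build fails cleanly if duplicate values already exist.
CREATE UNIQUE INDEX CONCURRENTLY idx_shipments_tracking_number
    ON shipments(tracking_number);

-- Step 2: promote the index to a named constraint.
-- This takes only a brief lock because the index already exists.
ALTER TABLE shipments
    ADD CONSTRAINT uq_shipments_tracking_number
    UNIQUE USING INDEX idx_shipments_tracking_number;
```

The two steps must run as separate statements, since CREATE INDEX CONCURRENTLY cannot run inside a transaction block.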
Generating Any Migration Scenario
Use the chat panel with your schema open:
```
Generate a zero-downtime migration plan for the following change:
Add a NOT NULL column 'sustainability_score' DECIMAL(5,2) to the products table.
The column should default to 50.00 for all existing rows.
Include: Phase 1 (expand), Phase 2 (backfill), Phase 3 (constraint addition).
Include rollback scripts for each phase. Target: PostgreSQL 15.
```
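A plan for this particular change can be sketched as follows. This is an illustrative sketch, not literal tool output: it assumes PostgreSQL 11+, where adding a column with a constant DEFAULT is a metadata-only change that does not rewrite the table.

```sql
-- Phase 1 (expand): add the column with its default, still nullable.
-- On PostgreSQL 11+ existing rows read the default without a rewrite.
ALTER TABLE products
    ADD COLUMN sustainability_score DECIMAL(5, 2) DEFAULT 50.00;

-- Phase 2 (backfill): only needed for rows explicitly set to NULL
-- after the column was added
UPDATE products
SET sustainability_score = 50.00
WHERE sustainability_score IS NULL;

-- Phase 3 (constraint): enforce NOT NULL once no NULLs remain.
-- This scans the table under a brief lock; on very large tables,
-- add a CHECK (sustainability_score IS NOT NULL) NOT VALID constraint
-- and VALIDATE it first so SET NOT NULL can skip the scan (PG 12+).
ALTER TABLE products
    ALTER COLUMN sustainability_score SET NOT NULL;
```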
Frequently Asked Questions
Can TalkingSchema generate migrations for MySQL?
Yes. MySQL has different online DDL support per version and storage engine. Specify: "Generate this migration for MySQL 8.0 InnoDB. Use ALGORITHM=INPLACE where supported."
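For example, an online index addition on MySQL 8.0 InnoDB might be sketched like this (index name is illustrative). Specifying the algorithm and lock level explicitly makes the statement fail fast instead of silently falling back to a blocking table copy:

```sql
-- MySQL 8.0 / InnoDB: errors out if INPLACE with no lock is unsupported,
-- rather than degrading to ALGORITHM=COPY and blocking writes
ALTER TABLE shipments
    ADD INDEX idx_shipments_po_id (po_id),
    ALGORITHM = INPLACE, LOCK = NONE;
```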
What is a NOT VALID foreign key in PostgreSQL?
Adding a constraint with ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY ... NOT VALID creates the foreign key without scanning existing rows for violations, so its lock is held only briefly. New writes are checked against the constraint immediately. A subsequent VALIDATE CONSTRAINT statement checks existing data while holding only a SHARE UPDATE EXCLUSIVE lock (much less disruptive than a full ACCESS EXCLUSIVE). TalkingSchema uses this pattern automatically for foreign key additions on large tables.
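Concretely, the pattern looks like this. The shipments.po_id to purchase_orders relationship is assumed for illustration; adjust names to your schema.

```sql
-- Step 1: add the constraint without scanning existing rows.
-- The lock is held only briefly; existing violations are not checked yet.
ALTER TABLE shipments
    ADD CONSTRAINT fk_shipments_po
    FOREIGN KEY (po_id) REFERENCES purchase_orders (po_id)
    NOT VALID;

-- Step 2: validate existing rows in a separate transaction.
-- Holds only SHARE UPDATE EXCLUSIVE, so reads and writes continue.
ALTER TABLE shipments
    VALIDATE CONSTRAINT fk_shipments_po;
```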
How do I handle a migration that must be irreversible?
Some operations — like deleting a column after data migration, or merging two tables — cannot be cleanly reversed. Ask: "Mark this step as irreversible and include a pre-flight checklist: data backup verification, application code deployment confirmation, and feature flag status."