
Schema Migration Overview

TalkingSchema treats schema migration not as an afterthought but as a first-class workflow. Every ERD change produces a traceable diff between the previous and current state. The AI copilot converts that diff into actionable migration SQL — ordered correctly to respect foreign key dependencies, safe for zero-downtime execution, and accompanied by a rollback plan that undoes every change in reverse sequence.


Topics in this section

  • Schema Diff: Visual comparison of before/after schema states on the ERD canvas
  • Migration Plan: Zero-downtime migration plans with expand-contract patterns and rollback scripts
  • CI/CD Migrations: Integrate Flyway, Liquibase, or raw SQL into your deployment pipeline

What Is a Schema Migration?

A schema migration is a versioned, repeatable script that transitions a database from one structural state to another — adding tables, altering columns, creating indexes, or dropping constraints. In any team-based engineering workflow, schema changes are among the highest-risk operations: they affect running applications, often require careful coordination across releases, and can be irreversible if not planned precisely.

A well-structured migration:

  • Is reversible — every forward step has a corresponding rollback
  • Is ordered — respects foreign key constraints (you cannot drop a referenced table before removing the reference)
  • Is safe for production — uses online DDL patterns (such as adding a nullable column before backfilling, or creating an index CONCURRENTLY) to avoid locking
  • Is documented — describes the intent of the change, not just the SQL

TalkingSchema's AI copilot generates migrations that satisfy all four criteria, for schema changes of any complexity.


Getting started with migrations

  1. Import your current schema — Connect to a live database (Supabase, Neon), upload SQL, or paste DDL. This becomes the "before" state.
  2. Describe or apply changes — Use the AI copilot or canvas to add tables, alter columns, or modify relationships. Plan Mode lets you review every proposed change before execution.
  3. Request a migration — Ask: "Generate a zero-downtime migration plan from the current schema to the target state. Include rollback SQL."
  4. Export to your tool — Get output in Flyway, Liquibase, Alembic, or raw SQL. See CI/CD integration →

Types of Migrations TalkingSchema Can Generate

Additive migrations (lowest risk)

Add new tables, columns, indexes, or enum values without altering existing structures.

-- Generated by TalkingSchema: Add carbon_offset_credits table
-- Migration: V007__add_carbon_offset_credits.sql

BEGIN;

CREATE TABLE carbon_offset_credits (
    credit_id       UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    supplier_id     UUID NOT NULL REFERENCES suppliers(supplier_id),
    credit_date     DATE NOT NULL,
    volume_tco2e    DECIMAL(12, 4) NOT NULL CHECK (volume_tco2e > 0),
    registry        VARCHAR(100) NOT NULL, -- e.g. 'Gold Standard', 'VCS'
    certificate_ref VARCHAR(100) UNIQUE,
    created_at      TIMESTAMPTZ DEFAULT now()
);

CREATE INDEX idx_carbon_credits_supplier ON carbon_offset_credits(supplier_id);
CREATE INDEX idx_carbon_credits_date ON carbon_offset_credits(credit_date);

COMMIT;

-- Rollback:
-- DROP TABLE IF EXISTS carbon_offset_credits;

Column alterations

Rename, retype, or change nullability of existing columns — with backfill logic when required.

-- Generated by TalkingSchema: Rename co2_kg → emissions_kg, add unit column
-- Migration: V008__rename_emissions_column.sql

BEGIN;

-- Step 1: Add new column (nullable to avoid locking)
ALTER TABLE shipments ADD COLUMN emissions_kg DECIMAL(10, 3);
ALTER TABLE shipments ADD COLUMN emissions_unit VARCHAR(20) DEFAULT 'kg_co2';

-- Step 2: Backfill from existing data
UPDATE shipments SET emissions_kg = co2_kg WHERE co2_kg IS NOT NULL;

-- Step 3: Apply NOT NULL constraint after backfill
-- (Only safe after verifying backfill completeness)
-- ALTER TABLE shipments ALTER COLUMN emissions_kg SET NOT NULL;

-- Step 4: Drop old column in a subsequent migration after application
-- is confirmed to no longer reference co2_kg
-- ALTER TABLE shipments DROP COLUMN co2_kg;

COMMIT;

-- Rollback:
-- BEGIN;
-- UPDATE shipments SET co2_kg = emissions_kg WHERE emissions_kg IS NOT NULL;
-- ALTER TABLE shipments DROP COLUMN IF EXISTS emissions_kg;
-- ALTER TABLE shipments DROP COLUMN IF EXISTS emissions_unit;
-- COMMIT;

Index management

Add or drop indexes — using CONCURRENTLY for PostgreSQL to avoid table locks on large tables.

-- Generated by TalkingSchema: Add covering index for order status queries
-- Migration: V009__add_order_status_indexes.sql

-- Note: CONCURRENTLY cannot run inside a transaction block
-- Run these statements individually, not wrapped in BEGIN/COMMIT

CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_purchase_orders_status_created
    ON purchase_orders(status, created_at DESC)
    WHERE status != 'cancelled';

CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_sales_orders_customer_status
    ON sales_orders(customer_id, status);

-- Rollback:
-- DROP INDEX CONCURRENTLY IF EXISTS idx_purchase_orders_status_created;
-- DROP INDEX CONCURRENTLY IF EXISTS idx_sales_orders_customer_status;

Exporting Migrations in Framework Formats

Ask the AI copilot to generate migrations in your preferred format. See CI/CD integration for pipeline setup →

  • Flyway: "Generate this migration as a Flyway versioned migration file. Use filename format V<version>__<description>.sql. Include a comment block at the top with migration date, author, description, and rollback instructions."
  • Liquibase: "Export this migration as a Liquibase XML changeSet. Use sequential changeset IDs. Include rollback elements for each change."
  • Alembic: "Generate an Alembic migration for these schema changes. Include upgrade() and downgrade() functions with the correct op. calls."
  • Raw SQL: "Generate an ordered SQL migration script. Start with the safest changes (additive), end with destructive changes (drops). Wrap in a transaction where possible. Add COMMIT/ROLLBACK points."
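As an illustration of the raw SQL ordering convention, a generated script might resemble the sketch below. The tables and columns here (routes, shipments.route_id, legacy_route_code) are hypothetical, not output from any real schema:

```sql
-- Illustrative sketch only: additive first, destructive last

BEGIN;

-- 1. Additive changes (lowest risk)
CREATE TABLE routes (
    route_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name     VARCHAR(100) NOT NULL
);
ALTER TABLE shipments ADD COLUMN route_id UUID;

-- 2. Backfill data here, before constraints are added

-- 3. Constraints once data is in place
ALTER TABLE shipments
    ADD CONSTRAINT fk_shipments_route
    FOREIGN KEY (route_id) REFERENCES routes(route_id);

-- 4. Destructive changes last (left commented until verified)
-- ALTER TABLE shipments DROP COLUMN legacy_route_code;

COMMIT;
```

Ordering additive steps first means a failure partway through leaves the schema in a state the running application can still use.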

How TalkingSchema Knows What Changed

When you change your schema through the AI copilot or canvas, TalkingSchema keeps both the before and after states of the schema model and computes the diff between them. This diff is:

  • Visualized on the ERD canvas (green additions, amber modifications, red removals)
  • Used by the AI to generate precise, dependency-ordered migration SQL
  • Available for review before any export

Learn how the schema diff works →

You can also describe a hypothetical migration without applying ERD changes: "What migration would be needed to add row-level security to the suppliers table?"
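For the hypothetical prompt above, a response might resemble the following PostgreSQL sketch. The tenant_id column and the app.current_tenant session setting are assumptions for illustration; a real policy would depend on your auth model:

```sql
-- Sketch: enable row-level security on suppliers (illustrative policy)

BEGIN;

ALTER TABLE suppliers ENABLE ROW LEVEL SECURITY;

-- Assumes each session sets app.current_tenant to the caller's tenant UUID
CREATE POLICY suppliers_tenant_isolation ON suppliers
    USING (tenant_id = current_setting('app.current_tenant')::uuid);

COMMIT;

-- Rollback:
-- DROP POLICY IF EXISTS suppliers_tenant_isolation ON suppliers;
-- ALTER TABLE suppliers DISABLE ROW LEVEL SECURITY;
```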


Frequently Asked Questions

Can TalkingSchema generate migrations for multi-database schemas?

Yes. Specify the target dialect in your prompt: "Generate this migration for both PostgreSQL and MySQL. Note any syntax differences."
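As a sketch of the kind of syntax differences such a prompt surfaces, the same additive change diverges between dialects (the suppliers.region column is hypothetical):

```sql
-- PostgreSQL: idempotent guards and lock-free index builds are available
ALTER TABLE suppliers ADD COLUMN IF NOT EXISTS region VARCHAR(50);
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_suppliers_region ON suppliers(region);

-- MySQL 8.0: no IF NOT EXISTS on ADD COLUMN and no CONCURRENTLY;
-- online index builds use ALGORITHM/LOCK clauses instead
ALTER TABLE suppliers ADD COLUMN region VARCHAR(50);
ALTER TABLE suppliers ADD INDEX idx_suppliers_region (region),
    ALGORITHM=INPLACE, LOCK=NONE;
```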

How does the AI handle column renames safely?

Column renames are generated as a three-step expand-contract pattern: (1) add the new column, (2) backfill it from the old column, (3) drop the old column in a separate migration after the application no longer references it. This decouples the database change from the application release, avoiding the downtime that a lockstep application-and-database deployment would otherwise require.

What if I need to migrate data as well as schema?

Ask explicitly: "This migration adds a new status column. Generate a data migration that sets status = 'active' for all existing rows where deleted_at IS NULL." The AI separates schema and data migration steps for clarity and testability.
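For the prompt above, the separated steps might look like this sketch (the accounts table name and the 'inactive' backfill branch are assumptions added so the NOT NULL constraint can succeed):

```sql
-- Step 1 (schema): add the column, nullable first
ALTER TABLE accounts ADD COLUMN status VARCHAR(20);

-- Step 2 (data): backfill existing rows
UPDATE accounts SET status = 'active'   WHERE deleted_at IS NULL;
UPDATE accounts SET status = 'inactive' WHERE deleted_at IS NOT NULL;

-- Step 3 (schema): tighten the constraint once the backfill is verified
ALTER TABLE accounts ALTER COLUMN status SET NOT NULL;
```

Keeping the data step separate means it can be tested, and re-run if interrupted, independently of the DDL.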

Does this replace dedicated tools like Flyway or Atlas?

No. TalkingSchema generates the migration SQL; Flyway, Liquibase, Atlas, or psql execute it. Think of TalkingSchema as the authoring layer — the tool that ensures your migration is correct and complete before it enters your version control system.