TalkingSchema Copilot - API

TalkingSchema provides both an API and an embeddable UI: not just programmatic schema generation, but a visual ERD canvas you can embed directly in your application. The same AI-powered engine that generates entity relationship diagrams, runs Plan Mode checklists, and produces framework-specific exports is available as a programmable service for teams building data platforms, developer tools, and AI-native data modeling workflows.

What you get:

  • API — Structured schema outputs (JSON, SQL, Prisma, Drizzle, OpenAPI, etc.) via REST
  • Embeddable iframe — A live-updating ERD canvas that reflects schema changes as the AI copilot applies them. Embed it in internal developer portals, documentation tools, or SaaS products so your users see the current schema without leaving your application.

The API is under development and is available on custom request. Contact us to discuss access, pricing, and integration requirements.


Three API Surfaces

1. AI Agent Orchestration

Send natural language schema design requests to TalkingSchema's reasoning engine. Receive structured schema outputs — tables, columns, relationships, constraints — as JSON, SQL DDL, Prisma schema, or any supported export format.

Use case: You are building a data platform onboarding flow. When a new customer describes their business domain, your application calls the TalkingSchema API to generate an initial schema proposal. Pair the API with the embeddable iframe so users can review the schema visually within your UI.

What you provide:

  • Natural language requirements or existing schema context
  • Target dialect (PostgreSQL, MySQL, etc.)
  • Export format preference

What you receive:

  • Structured schema model (JSON)
  • Export in requested format (SQL DDL, Prisma, Drizzle, OpenAPI, etc.)
  • Ordered checklist of proposed changes
  • Diff from previous state (if schema context was provided)
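To make the request/response shapes above concrete, here is a minimal TypeScript sketch. The field names, defaults, and endpoint are assumptions for illustration, not the published API contract:

```typescript
// Hypothetical shapes for the AI Agent Orchestration surface.
// Field names and the endpoint path are illustrative assumptions.

interface SchemaRequest {
  requirements: string;                            // natural language description
  dialect: "postgresql" | "mysql";                 // target dialect
  exportFormat: "sql" | "prisma" | "drizzle" | "openapi";
  schemaContext?: object;                          // existing schema, if any
}

interface ChangeItem {
  type: string;        // e.g. "create_table"
  target: string;      // e.g. "orders"
  description: string;
}

interface SchemaResponse {
  schema: object;        // structured schema model (JSON)
  export: string;        // export in the requested format
  checklist: ChangeItem[];
  diff?: object;         // present only when schemaContext was sent
}

// Build a request with illustrative defaults.
function buildRequest(requirements: string): SchemaRequest {
  return { requirements, dialect: "postgresql", exportFormat: "prisma" };
}

// Parse a response body into the typed shape above.
function parseResponse(json: string): SchemaResponse {
  return JSON.parse(json) as SchemaResponse;
}

// A call might then look like (endpoint is a placeholder):
// await fetch("https://api.talkingschema.example/v1/schemas", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildRequest("A marketplace with buyers, sellers, and orders")),
// });
```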

2. Bring Your Own Model (BYOM)

For teams with strict data residency requirements, regulated industries, or existing model contracts, TalkingSchema's BYOM mode separates the AI model from the schema domain logic:

Your Application
↓ schema request
TalkingSchema Domain Layer
— Schema context management
— Plan Mode checklist engine
— Change diff computation
— ERD layout and rendering
↓ structured prompt
Your Model (Azure OpenAI / self-hosted / Anthropic)
↓ structured response
TalkingSchema Domain Layer
— Parse and validate model output
— Apply changes to schema model
— Generate diff and checklist
↓ structured schema output
Your Application

Your schema data and conversation context go directly to your model. TalkingSchema provides the domain intelligence, not the AI infrastructure.
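The hand-off in the middle of that flow can be sketched as follows. This is an assumption about how a structured prompt maps onto the standard /v1/chat/completions request shape; the prompt content and system message are placeholders:

```typescript
// BYOM hand-off sketch: the domain layer emits a structured prompt, and your
// application forwards it to any OpenAI-compatible chat completions endpoint.
// The system message content is an illustrative assumption.

interface ChatMessage {
  role: "system" | "user";
  content: string;
}

function toChatRequest(structuredPrompt: string, model: string) {
  const messages: ChatMessage[] = [
    { role: "system", content: "You are a database schema design assistant." },
    { role: "user", content: structuredPrompt },
  ];
  // temperature 0 keeps output deterministic, which parses more reliably
  return { model, messages, temperature: 0 };
}

// The same request body works against Azure OpenAI, Ollama, vLLM, LM Studio,
// or any other OpenAI-compatible endpoint:
// await fetch(`${baseUrl}/v1/chat/completions`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(toChatRequest(prompt, "gpt-4o")),
// });
```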

Supported model providers:

  • OpenAI (API key or Azure OpenAI endpoint)
  • Anthropic (API key)
  • Self-hosted models via OpenAI-compatible API (Ollama, vLLM, LM Studio)
  • Any provider with an OpenAI-compatible /v1/chat/completions endpoint

3. Embeddable ERD Canvas (iframe + API)

Embed TalkingSchema's interactive ERD canvas — the same interface used in the main product — directly in your application or internal tool via iframe. The canvas updates live as the AI copilot applies schema changes, so your users always see the current state without manual refresh.

Use cases:

  • An internal developer portal where engineers view and propose schema changes for their team's databases
  • A SaaS product that needs a database design interface without building a canvas from scratch
  • A data catalog that displays live ERD diagrams synced to your schema source of truth

What you get:

  • Live-updating iframe — The embedded canvas reflects schema changes in real time as the copilot adds tables, alters columns, or modifies relationships. No polling or manual sync required.
  • Full visual fidelity — Tables, relationships, constraints, and diff overlays render identically to the main TalkingSchema experience.
  • Configurable embedding — Read-only for documentation, or interactive for schema design workflows.

Embedding options:

  • Iframe — Embed a session-scoped canvas with a token (no TalkingSchema account required for your users)
  • JavaScript SDK — Deeper integration with EmbeddableSchemaDAG: custom events, programmatic schema loading, and control over layout and styling
  • Pre-loaded schema — Pass schema data directly so the canvas renders without an API round-trip
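For the iframe option, embedding might look like the sketch below. The host and query parameter names are assumptions, not the published embed contract:

```typescript
// Hypothetical helper: compose a session-scoped canvas URL from an embed
// token. The embed host and parameter names are illustrative assumptions.

function buildEmbedUrl(token: string, readOnly: boolean): string {
  const url = new URL("https://embed.talkingschema.example/canvas");
  url.searchParams.set("token", token);
  url.searchParams.set("mode", readOnly ? "readonly" : "interactive");
  return url.toString();
}

// The resulting URL would go into a plain iframe:
// <iframe src={buildEmbedUrl(sessionToken, true)} width="100%" height="600" />
```

Read-only mode suits documentation embeds; interactive mode suits design workflows where users propose changes from within your UI.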

Why a Dedicated API Layer

The core insight: schema reasoning ≠ general LLM prompting

Generating a correct, production-safe database schema is not a matter of asking GPT "design a database." It requires:

  • Relational constraint awareness — foreign keys must reference existing primary keys; cascade rules must be explicit
  • Normalization judgment — knowing when to normalize vs. denormalize for the use case
  • Migration safety — understanding which changes require expand-contract sequencing
  • Export format fidelity — mapping schema constructs correctly to Prisma, Drizzle, OpenAPI, and dozens of other target formats
  • Diff correctness — computing the precise structural delta between schema versions

TalkingSchema encodes this domain knowledge in the reasoning layer. Your model provides general intelligence; TalkingSchema provides database-specific correctness.
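To make "diff correctness" concrete, here is a toy structural delta between two schema versions, at table granularity only. The real engine also diffs columns, constraints, and relationships; this sketch is purely illustrative:

```typescript
// Toy structural diff between two schema versions, tables only.
// The real diff engine covers columns, constraints, and relationships.

type Schema = { tables: Record<string, string[]> }; // table name -> column names

function diffTables(before: Schema, after: Schema) {
  const prev = new Set(Object.keys(before.tables));
  const next = new Set(Object.keys(after.tables));
  return {
    added: [...next].filter((t) => !prev.has(t)),     // tables new in `after`
    removed: [...prev].filter((t) => !next.has(t)),   // tables dropped from `before`
  };
}
```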


Use Cases

  Use case                                    API surface
  Customer onboarding schema generator        AI Agent Orchestration + Embeddable Canvas
  Internal developer portal with ERD view     Embeddable Canvas
  Regulated-data platform with private AI     Bring Your Own Model
  Automated schema review in pull requests    AI Agent Orchestration
  Multi-tenant SaaS schema template engine    AI Agent Orchestration
  Data catalog with live ERD sync             Embeddable Canvas + Agent
  No-code platform with database builder      Embeddable Canvas

Request API Access

The TalkingSchema API is currently in private access. To apply:

  • Email: Contact us
  • Subject: API Access Request
  • Include: Your use case, expected schema complexity, preferred model provider (if using BYOM), and any data residency requirements

Enterprise agreements with SLA guarantees, dedicated infrastructure, and custom model deployment assistance are available.


Frequently Asked Questions

What is the expected request/response format?

The API uses JSON over HTTPS. Schema models are represented as a versioned JSON document. Exports are returned as strings in the requested format. Checklists are returned as ordered arrays of change objects with type, target, and description fields.
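A minimal validator for the checklist shape described above can serve as a sketch of what a client might do with the response. The shape (ordered array of objects with type, target, and description fields) comes from the answer above; the function names are illustrative:

```typescript
// Validate the checklist shape: an ordered array of change objects,
// each with string `type`, `target`, and `description` fields.

interface ChangeItem {
  type: string;
  target: string;
  description: string;
}

function isChangeItem(x: unknown): x is ChangeItem {
  if (typeof x !== "object" || x === null) return false;
  const o = x as Record<string, unknown>;
  return ["type", "target", "description"].every((k) => typeof o[k] === "string");
}

function parseChecklist(json: string): ChangeItem[] {
  const parsed = JSON.parse(json);
  if (!Array.isArray(parsed) || !parsed.every(isChangeItem)) {
    throw new Error("malformed checklist");
  }
  return parsed;
}
```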

Can TalkingSchema's API be used inside an AI agent workflow (LangChain, LlamaIndex, AutoGen)?

Yes. The AI Agent Orchestration surface is designed to be called as a tool by an orchestrating agent. A schema design tool call would pass requirements as input and receive structured schema output as the tool result.
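As a sketch of that tool pattern: most agent frameworks can wrap a plain function that takes text input and returns a string result. The client function below stands in for an actual TalkingSchema API call (real implementations would be async); everything here is an illustrative assumption:

```typescript
// Sketch of exposing the orchestration surface as an agent tool.
// `client` stands in for a real API call; kept synchronous for clarity,
// though a real client would be async.

type ToolResult = { schema: object; export: string };

function schemaDesignTool(
  requirements: string,
  client: (req: string) => ToolResult,
): string {
  const result = client(requirements);
  // Agent frameworks generally expect a string tool result,
  // so serialize the structured output.
  return JSON.stringify(result);
}
```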

Is there rate limiting?

Rate limits depend on the access tier. Enterprise agreements include dedicated throughput with guaranteed latency SLAs.

Does the API support streaming responses?

Streaming schema generation (for progressive ERD rendering as the AI produces output) is on the roadmap. Initial API access returns complete responses.

Can I embed the schema in an iframe in my internal UI?

Yes. The Embeddable ERD Canvas surface provides a live-updating canvas that you can embed via iframe or the EmbeddableSchemaDAG JavaScript component. The canvas reflects schema changes as the AI copilot applies them — ideal for internal developer portals, documentation tools, and SaaS products that need database design capabilities without building a canvas from scratch.