The Problem with “Just Migrate”
Most organizations are sitting on MySQL deployments they can’t easily change — a mix of community editions, managed cloud services, and legacy versions. Teams want RAG pipelines and natural language querying, but adding AI capabilities typically means schema migrations, new vector database infrastructure, dual-write synchronization headaches, and AI logic sprawled across every application layer. The operational cost is real, and the governance risk is worse.
ProxySQL v4.0 takes a different approach: don’t touch your database at all.
The Transparent AI Layer
The core thesis is elegant — put the intelligence at the proxy layer, not in the database or the application. ProxySQL already sits between every client and every MySQL backend, which makes it a natural choke point for centralized governance, auth, auditing, and now AI capabilities. No connection string changes, no schema migrations, no new infrastructure your DBA team has to babysit.
The comparison with app-layer integration is stark. Where app-layer AI means fragmented governance across every service, schema changes, and multiple network hops, the ProxySQL AI layer provides unified query rules, zero schema changes, and a single enforced access path.
What’s Actually in v4.0
The MCP (Model Context Protocol) server is the centerpiece. Running on port 6071 over HTTPS with bearer token auth, it exposes 30+ tools that any MCP-compatible agent — Claude Code, GitHub Copilot, Cursor, Warp — can discover and call via standard JSON-RPC. Tools span schema discovery (list_schemas, list_tables, list_columns), safe read-only execution (run_sql_readonly, explain_sql), and full RAG search capabilities (rag.search_fts, rag.search_vector, rag.search_hybrid).
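To make the mechanics concrete, here is a minimal sketch of what an agent's tool call looks like on the wire. The port, bearer-token auth, and tool names come from the release notes; the endpoint path, token, and argument names are assumptions for illustration:

```python
import json

MCP_URL = "https://proxysql-host:6071/mcp"  # hypothetical path; port 6071 per the release
TOKEN = "my-bearer-token"                   # placeholder credential

def make_tool_call(tool: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a standard JSON-RPC 2.0 'tools/call' request for an MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

# Ask the proxy to run a read-only query via the run_sql_readonly tool.
request = make_tool_call("run_sql_readonly",
                         {"sql": "SELECT id, name FROM users LIMIT 5"})
print(json.dumps(request, indent=2))
```

Any MCP-compatible client sends requests of this shape; the agent first calls `tools/list` to discover the 30+ tools, then `tools/call` to invoke them.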
All MCP requests pass through MCP Query Rules — analogous to ProxySQL’s existing mysql_query_rules — which can allow, block, rewrite, or timeout requests before they ever reach MySQL. This is where you enforce read-only access, prevent data exfiltration, and audit everything agents are doing.
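The announcement doesn't document the rule schema, but conceptually each rule matches an incoming request and yields a verdict, first match wins, just like mysql_query_rules. A minimal sketch of that allow/block/rewrite logic (all field and rule names here are hypothetical):

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class MCPQueryRule:
    """Hypothetical shape of an MCP query rule: match a pattern, apply an action."""
    pattern: str                      # regex matched against the incoming SQL
    action: str                       # "allow", "block", or "rewrite"
    rewrite_to: Optional[str] = None  # replacement SQL when action == "rewrite"

RULES = [
    MCPQueryRule(r"(?i)\b(INSERT|UPDATE|DELETE|DROP|ALTER)\b", "block"),  # enforce read-only
    MCPQueryRule(r"(?i)SELECT \* FROM users", "rewrite",
                 rewrite_to="SELECT id, name FROM users"),  # strip sensitive columns
    MCPQueryRule(r".*", "allow"),  # default: pass through
]

def apply_rules(sql: str) -> tuple[str, Optional[str]]:
    """Return (verdict, effective_sql); the first matching rule wins."""
    for rule in RULES:
        if re.search(rule.pattern, sql):
            if rule.action == "block":
                return ("block", None)
            if rule.action == "rewrite":
                return ("rewrite", rule.rewrite_to)
            return ("allow", sql)
    return ("allow", sql)

print(apply_rules("DELETE FROM users"))    # blocked before it ever reaches MySQL
print(apply_rules("SELECT * FROM users"))  # rewritten to exclude sensitive columns
```

Because every agent request funnels through one rule table, read-only enforcement and auditing live in a single place instead of in each application.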
The Autodiscovery system (discovery.run_static) runs a two-phase process: static schema harvesting followed by LLM-enriched metadata including summaries and domain analysis. Everything lands in a local SQLite catalog (mcp_catalog.db) that agents can then search semantically via llm.search.
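The real layout of mcp_catalog.db isn't described in the announcement, so the table and column names below are invented for illustration. The sketch shows the idea: discovery fills a local SQLite catalog with LLM-enriched summaries, and searches run against those summaries rather than the live MySQL backend:

```python
import sqlite3

# Hypothetical layout for the mcp_catalog.db discovery catalog;
# an in-memory database stands in for the real file.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE catalog_tables (
        schema_name TEXT,
        table_name  TEXT,
        summary     TEXT    -- LLM-generated description from the enrichment phase
    );
    INSERT INTO catalog_tables VALUES
        ('shop', 'orders',    'Customer orders with totals and timestamps'),
        ('shop', 'customers', 'Customer accounts and contact details');
""")

def search_catalog(keyword: str) -> list[tuple[str, str]]:
    """Naive keyword search over enriched summaries (llm.search would do this semantically)."""
    rows = conn.execute(
        "SELECT schema_name, table_name FROM catalog_tables WHERE summary LIKE ?",
        (f"%{keyword}%",),
    )
    return rows.fetchall()

print(search_catalog("orders"))
```

The point of the local catalog is that agents can answer "which table holds X?" without issuing a single query against the production database.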
The NL2SQL workflow builds on this: agents search the catalog for relevant schemas, synthesize or reuse SQL templates, execute safely via run_sql_readonly, and optionally store successful query patterns as templates for future reuse — a continuous learning loop that improves accuracy over time.
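The loop above can be sketched in a few lines. Every function here is a stub standing in for an MCP tool call (the names echo the tools, but the wiring and return shapes are assumptions):

```python
# Sketch of the NL2SQL learning loop; stubs stand in for real MCP tool calls.

TEMPLATES: dict[str, str] = {}  # question -> proven SQL template

def search_catalog(question: str) -> list[str]:
    return ["shop.orders"]  # stand-in for llm.search over the discovery catalog

def synthesize_sql(question: str, tables: list[str]) -> str:
    return f"SELECT COUNT(*) FROM {tables[0]}"  # stand-in for LLM SQL synthesis

def run_sql_readonly(sql: str) -> list[tuple]:
    return [(42,)]  # stand-in for safe execution through the proxy

def answer(question: str) -> list[tuple]:
    # 1. Reuse a stored template if this question was solved before.
    sql = TEMPLATES.get(question)
    if sql is None:
        # 2. Otherwise search the catalog and synthesize fresh SQL.
        tables = search_catalog(question)
        sql = synthesize_sql(question, tables)
    # 3. Execute safely, then 4. store the successful pattern for reuse.
    result = run_sql_readonly(sql)
    TEMPLATES[question] = sql
    return result

print(answer("How many orders are there?"))  # synthesized on the first call
print(answer("How many orders are there?"))  # served from the template cache
```

The template cache is what makes the loop "continuous learning": repeated questions skip synthesis entirely and reuse SQL that has already executed successfully.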
What’s Still Coming
The presentation is upfront that this is a prototype showcase. Still on the roadmap: automatic embedding generation (with local or external model options), real-time indexing via MySQL replication/binlog without touching source tables, DISTANCE() SQL semantics for vector search on AI-blind MySQL backends, and additional MCP endpoints for config management, cache inspection, and observability.
The Bottom Line
The proxy layer argument is compelling: it’s operationally mature, protocol-aware, already deployed in front of critical databases, and has a battle-tested policy engine. Adding AI there rather than in application code means one place to enforce rules, one place to audit, and zero changes to the workloads that depend on MySQL stability.
The code is on the v4.0 branch at github.com/sysown/proxysql, and the Generative AI documentation section at sysown.github.io/proxysql covers the new features.
Download the “Bringing GenAI to Every MySQL Instance” presentation given by René Cannaò at preFOSDEM MySQL Belgian Days 2026 in Brussels.

The post Bringing GenAI to Every MySQL Instance: ProxySQL v4.0 appeared first on ProxySQL.