Documentation
Everything you need to install, configure, and operate SynergyDB in production. This guide covers all 16+ wire protocols, 5 data models (relational, document, graph, vector, time-series), AI/embedding integration, migration workflows, security hardening, and performance tuning.
Getting Started
SynergyDB is an AI-native universal database written in Rust. It ships as a single binary and an official Docker image. A typical deployment exposes 16+ protocol ports simultaneously, each handling the corresponding wire protocol natively. All protocols read from and write to the same underlying LSM-tree storage engine, so data written through one protocol is instantly visible through any other. SynergyDB supports 5 data models: relational, document, graph, vector, and time-series.
Installation
Choose the method that fits your environment. Docker is recommended for evaluation and development; the binary install is preferred for bare-metal production deployments where you need fine-grained control over systemd units, file descriptors, and kernel parameters.
Docker (recommended)
"color:#64748b"># Docker (recommended)
docker run -d --name synergydb \
-p 5432:5432 \
-p 3306:3306 \
-p 27017:27017 \
-p 6379:6379 \
-p 7687:7687 \
-p 8080:8080 \
-p 9090:9090 \
synergytechnologies/synergydb:latestThe image defaults to the bundled configuration. Mount a custom config file to override any setting:
docker run -d --name synergydb \
  -v /etc/synergydb/config.yaml:/etc/synergydb/config.yaml \
  -p 5432:5432 \
  -p 3306:3306 \
  -p 27017:27017 \
  -p 6379:6379 \
  -p 7687:7687 \
  -p 8080:8080 \
  -p 9090:9090 \
  synergytechnologies/synergydb:latest

Binary Install

# Binary
curl -fsSL https://get.synergydb.io | sh
synergydb start --config /etc/synergydb/config.yaml

Package Managers
"color:#64748b"># macOS
brew install synergytechnologies/tap/synergydb
"color:#64748b"># Debian / Ubuntu
curl -fsSL https:"color:#64748b">//repo.synergydb.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/synergydb.gpg
echo "deb [signed-by=/usr/share/keyrings/synergydb.gpg] https:">//repo.synergydb.io/apt stable main" \
| sudo tee /etc/apt/sources.list.d/synergydb.list
sudo apt update && sudo apt install synergydb
"color:#64748b"># RHEL / Fedora
sudo dnf install -y https:"color:#64748b">//repo.synergydb.io/rpm/synergydb-latest-1.x86_64.rpmConfiguration
All SynergyDB settings live in a single YAML file, typically at /etc/synergydb/config.yaml. Every option can also be set via environment variables using the prefix SYNERGYDB_ with underscores replacing dots (e.g., SYNERGYDB_STORAGE_DATA_DIR).
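The key-to-variable mapping can be sketched as a one-liner; the first example below is the one given above, the second applies the same rule to another key from the file and is our extrapolation:

```python
def env_var_name(config_key: str) -> str:
    """Map a dotted config.yaml key to its SYNERGYDB_ environment variable:
    prefix with SYNERGYDB_, replace dots with underscores, uppercase."""
    return "SYNERGYDB_" + config_key.replace(".", "_").upper()

print(env_var_name("storage.data_dir"))           # SYNERGYDB_STORAGE_DATA_DIR
print(env_var_name("protocols.postgresql.port"))  # SYNERGYDB_PROTOCOLS_POSTGRESQL_PORT
```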
"color:#64748b"># SynergyDB Configuration
"color:#64748b"># /etc/synergydb/config.yaml
cluster:
node_id: node-1
advertise_addr: 10.0.1.10:9400
peers:
- 10.0.1.11:9400
- 10.0.1.12:9400
"color:#64748b"># Protocol listeners
protocols:
postgresql:
enabled: true
host: 0.0.0.0
port: 5432
max_connections: 500
ssl: true
ssl_cert: /etc/synergydb/certs/server.crt
ssl_key: /etc/synergydb/certs/server.key
mysql:
enabled: true
host: 0.0.0.0
port: 3306
max_connections: 500
ssl: true
mongodb:
enabled: true
host: 0.0.0.0
port: 27017
max_connections: 1000
ssl: true
redis:
enabled: true
host: 0.0.0.0
port: 6379
max_connections: 10000
requirepass: ''
bolt:
enabled: true
host: 0.0.0.0
port: 7687
max_connections: 500
http:
enabled: true
host: 0.0.0.0
port: 8080
max_connections: 1000
grpc:
enabled: true
host: 0.0.0.0
port: 9090
max_connections: 1000
"color:#64748b"># Vector index
vector:
default_index: hnsw
hot_threshold_secs: 3600
warm_threshold_secs: 604800
"color:#64748b"># Storage engine
storage:
engine: synergy "color:#64748b"># synergy | rocksdb | hybrid
data_dir: /var/lib/synergydb/data
wal_dir: /var/lib/synergydb/wal
max_wal_size: 4GB
compaction_style: leveled
bloom_filter_bits: 10
block_cache_size: 8GB
write_buffer_size: 256MB
max_write_buffer_number: 4
"color:#64748b"># Replication
replication:
mode: synchronous "color:#64748b"># synchronous | asynchronous | quorum
quorum_size: 2
wal_shipping: true
snapshot_interval: 6h
max_lag_bytes: 67108864 "color:#64748b"># 64MB
"color:#64748b"># Security
security:
authentication: scram-sha-256 "color:#64748b"># scram-sha-256 | ldap | oidc
encryption_at_rest: true
encryption_algorithm: AES-256-GCM
master_key_provider: vault "color:#64748b"># vault | aws-kms | gcp-kms | local
vault_addr: https:"color:#64748b">//vault.internal:8200
vault_path: secret/synergydb/master-key
audit_log:
enabled: true
path: /var/log/synergydb/audit.log
format: json
include_queries: true
tls:
min_version: '1.3'
cipher_suites:
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
"color:#64748b"># Observability
telemetry:
metrics:
enabled: true
endpoint: /metrics
port: 9090
tracing:
enabled: true
exporter: otlp
endpoint: http:"color:#64748b">//otel-collector:4317
logging:
level: info "color:#64748b"># debug | info | warn | error
format: json
output: /var/log/synergydb/synergydb.logsynergydb config validate --config /etc/synergydb/config.yaml to check your configuration before starting the server. The validator catches type errors, port conflicts, and invalid TLS certificate paths.Quick Start
Once SynergyDB is running, you can connect using the native client for any supported protocol. Every protocol accesses the same storage layer, so a table created through PostgreSQL is immediately queryable through MySQL, visible as a collection in MongoDB, traversable as a graph via Neo4j Bolt, and accessible as a key namespace in Redis. Vector and full-text indexes are available across all protocols.
Connect via PostgreSQL
psql -h localhost -p 5432 -U admin -d synergydb

CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT UNIQUE NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);
INSERT INTO users (name, email) VALUES
('Alice Chen', 'alice@example.com'),
('Bob Martinez', 'bob@example.com'),
('Carol Johnson', 'carol@example.com');
SELECT * FROM users WHERE name LIKE 'A%';

Connect via MySQL
mysql -h 127.0.0.1 -P 3306 -u admin -p synergydb

-- Query the same table created via PostgreSQL
SELECT id, name, email FROM users ORDER BY created_at;
-- MySQL-specific syntax works natively
INSERT INTO users (name, email) VALUES ('Dave Kim', 'dave@example.com');
SELECT COUNT(*) AS total_users FROM users;

Connect via MongoDB

mongosh "mongodb://admin:password@localhost:27017/synergydb"

// The "users" table is accessible as a collection
db.users.find({ name: /^A/ })

// Insert a document -- it appears as a row in SQL protocols
db.users.insertOne({
  name: "Eve Park",
  email: "eve@example.com",
  created_at: new Date()
})

// Aggregation pipeline
db.users.aggregate([
  { $group: { _id: null, count: { $sum: 1 } } }
])
Connect via Redis

redis-cli -h localhost -p 6379

# Access user records as Redis hashes
HGETALL users:1

# Set a cache value
SET session:abc123 '{"user_id":1,"role":"admin"}' EX 3600

# Query keys by pattern
KEYS users:*

# Use Redis data structures alongside SQL tables
LPUSH notifications '{"type":"welcome","user_id":5}'
LRANGE notifications 0 -1

Data written with SET and HSET commands is stored in the unified storage engine and can be queried via SQL using the redis_keys virtual table.

Protocol Guides
SynergyDB implements each wire protocol at the network layer, providing native compatibility with existing clients, drivers, and ORMs. Each protocol handler translates incoming queries into the unified internal query plan representation before executing against the shared storage engine.
PostgreSQL Protocol
SynergyDB implements PostgreSQL wire protocol v3 (PostgreSQL 15+ compatible). All standard PostgreSQL clients, drivers, and ORMs connect without modification.
Connection String
postgresql:"color:#64748b">//user:password@host:5432/database?sslmode=require
"color:#64748b"># With all parameters
postgresql:"color:#64748b">//admin:s3cureP4ss@synergydb.example.com:5432/myapp?sslmode=verify-full&sslrootcert=/path/to/ca.crt&connect_timeout=10&application_name=myserviceSupported Features
ORM Compatibility
The following ORMs and query builders have been tested and verified with SynergyDB's PostgreSQL endpoint:
| ORM / Driver | Language | Status | Notes |
|---|---|---|---|
| Prisma | TypeScript | Full | Use postgresql provider. Migrations and introspection supported. |
| Drizzle ORM | TypeScript | Full | pg driver mode. Push and generate both work. |
| TypeORM | TypeScript | Full | postgres driver type. All decorators supported. |
| SQLAlchemy | Python | Full | psycopg2 and asyncpg dialects. Alembic migrations work. |
| ActiveRecord | Ruby | Full | postgresql adapter. Rails migrations fully supported. |
| GORM | Go | Full | Use postgres driver. AutoMigrate supported. |
| Diesel | Rust | Full | postgres feature flag. Compile-time query checking works. |
| JDBC | Java | Full | PostgreSQL JDBC driver 42.x. HikariCP connection pooling supported. |
"color:#64748b">// Prisma schema for SynergyDB (PostgreSQL protocol)
datasource db {
provider = "postgresql"
url = "postgresql:">//admin:password@localhost:5432/myapp"
}
generator client {
provider = "prisma-client-js"
}
"color:#64748b">// Models work exactly as with native PostgreSQL
"color:#64748b">// SynergyDB handles schema introspection and migrationsMySQL Protocol
SynergyDB implements the MySQL wire protocol (MySQL 8.0 compatible), supporting the COM_QUERY, COM_STMT_PREPARE, COM_STMT_EXECUTE, and COM_STMT_CLOSE command types. Authentication uses caching_sha2_password by default.
Connection String
mysql:"color:#64748b">//user:password@host:3306/database
"color:#64748b"># DSN format (Go, PHP)
user:password@tcp(host:3306)/database?tls=true&parseTime=true&multiStatements=true
"color:#64748b"># JDBC
jdbc:mysql:"color:#64748b">//host:3306/database?useSSL=true&serverTimezone=UTCSupported Features
ORM Compatibility
| ORM / Driver | Language | Status |
|---|---|---|
| Prisma | TypeScript | Full |
| Drizzle ORM | TypeScript | Full |
| Sequelize | JavaScript | Full |
| SQLAlchemy | Python | Full |
| Django ORM | Python | Full |
| Laravel Eloquent | PHP | Full |
| GORM | Go | Full |
| Hibernate | Java | Full |
MongoDB Protocol
SynergyDB speaks the MongoDB wire protocol (OP_MSG format, MongoDB 6.0+ compatible). It translates BSON documents into the internal columnar representation while preserving full document semantics including nested objects, arrays, and schemaless flexibility.
Connection String
mongodb:"color:#64748b">//user:password@host:27017/database
"color:#64748b"># Replica set (SynergyDB cluster)
mongodb:"color:#64748b">//user:password@node1:27017,node2:27017,node3:27017/database?replicaSet=synergy-rs&authSource=admin
"color:#64748b"># With TLS
mongodb:"color:#64748b">//user:password@host:27017/database?tls=true&tlsCAFile=/path/to/ca.pemSupported Features
Driver Compatibility
Official MongoDB drivers for all major languages work without modification. This includes the Node.js driver (mongodb), PyMongo, the Java driver, Motor (async Python), the Go driver, and Mongoid (Ruby). Mongoose ODM is fully supported including schema definitions, middleware, and population.
"color:#64748b">// Node.js MongoDB driver -- connects to SynergyDB without changes
const { MongoClient } = require('mongodb')
const client = new MongoClient('mongodb:"color:#64748b">//admin:password@localhost:27017')
await client.connect()
const db = client.db('myapp')
const users = db.collection('users')
"color:#64748b">// Aggregation pipeline with $lookup (joins across collections)
const results = await users.aggregate([
{ $match: { status: 'active' } },
{ $lookup: {
from: 'orders',
localField: '_id',
foreignField: 'user_id',
as: 'orders'
}},
{ $addFields: { order_count: { $size: '$orders' } } },
{ $sort: { order_count: -1 } },
{ $limit: 10 }
]).toArray()
"color:#64748b">// Change streams for real-time updates
const changeStream = users.watch()
changeStream.on('change', (change) => {
console.log('Change detected:', change.operationType)
})Redis Protocol
SynergyDB implements the RESP3 protocol (Redis 7.0+ compatible). Unlike standalone Redis, data stored through the Redis protocol is persisted in the same durable storage engine used by SQL protocols, providing full ACID guarantees and replication.
Connection String
redis:"color:#64748b">//user:password@host:6379/0
"color:#64748b"># With TLS
rediss:"color:#64748b">//user:password@host:6379/0
"color:#64748b"># Cluster mode (SynergyDB automatically handles slot distribution)
redis:"color:#64748b">//host:6379?cluster=trueSupported Commands
SynergyDB supports 200+ Redis commands across all major data types, covering the string, hash, list, set, sorted set, and stream command groups.
Data Type Mapping
When data is accessed across protocols, SynergyDB transparently maps between Redis data types and relational / document structures:
| Redis Type | SQL Representation | MongoDB Representation |
|---|---|---|
| String | TEXT column in redis_keys table | String field in redis_keys collection |
| Hash | Row in the corresponding table (fields map to columns) | Document with fields as top-level keys |
| List | JSONB array in redis_keys table | Array field |
| Set | JSONB array (deduplicated) in redis_keys table | Array field (deduplicated) |
| Sorted Set | Rows in redis_zsets table with score column | Array of {value, score} objects |
| Stream | Rows in redis_streams table with timestamp ordering | Capped collection with _id as stream ID |
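To make the Hash row concrete: a key such as users:1 (seen in the Quick Start) names a table and a primary key, and the hash fields become columns. A toy sketch of that convention, with the helper name and key layout ours for illustration:

```python
def hash_key_to_row(key: str, fields: dict):
    """Illustrate the Hash mapping from the table above: a Redis key
    like 'users:1' becomes the row with primary key 1 in table 'users',
    and hash fields become column values."""
    table, _, pk = key.partition(":")
    row = {"id": pk, **fields}
    return table, row

table, row = hash_key_to_row("users:1", {"name": "Alice Chen", "email": "alice@example.com"})
print(table)  # users
print(row)    # {'id': '1', 'name': 'Alice Chen', 'email': 'alice@example.com'}
```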
Neo4j Bolt Protocol
SynergyDB implements Neo4j Bolt protocol versions 4.x and 5.x, providing wire-level compatibility with official Neo4j drivers. Connect from any Neo4j client to run Cypher queries against SynergyDB's graph data model. Graph data is stored natively using adjacency list indexes with BFS/DFS and shortest path support.
Connection String
bolt:"color:#64748b">//user:password@host:7687
"color:#64748b"># With TLS
bolt+s:"color:#64748b">//user:password@host:7687
"color:#64748b"># Neo4j URI scheme (also supported)
neo4j:"color:#64748b">//user:password@host:7687Supported Features
Cypher Query Examples
"color:#64748b">// Find direct connections
SELECT * FROM GRAPH_MATCH(
(person:User {name: 'Alice'})-[:FOLLOWS]->(friend:User)
);
"color:#64748b">// Friends of friends (2 hops)
SELECT * FROM GRAPH_MATCH(
(person:User {name: 'Alice'})-[:FOLLOWS*2]->(fof:User)
);
"color:#64748b">// Variable-length paths (1 to 5 hops)
SELECT * FROM GRAPH_MATCH(
(a:User)-[:FOLLOWS*1..5]->(b:User)
)
WHERE a.id = 1;
"color:#64748b">// Shortest path
SELECT * FROM SHORTEST_PATH(
(start:User {id: 1}),
(end:User {id: 100}),
[:FOLLOWS]
);Driver Compatibility
Official Neo4j drivers for all major languages connect without modification, including the Python neo4j driver, JavaScript neo4j-driver, Java driver, Go driver, and .NET driver.
"color:#64748b"># Python with neo4j driver
from neo4j import GraphDatabase
driver = GraphDatabase.driver("bolt:">//localhost:7687", auth=("admin", "password"))
with driver.session() as session:
result = session.run("""
MATCH (u:User {name: $name})-[:FOLLOWS]->(friend)
RETURN friend.name AS name, friend.email AS email
""", name="Alice")
for record in result:
print(f"{record['name']}: {record['email']}")HTTP/REST API
SynergyDB provides a comprehensive REST API on port 8080 for database operations, including query execution, schema management, vector search, and an OpenAI-compatible embeddings endpoint. The API uses JSON for request/response payloads and supports streaming via NDJSON and Server-Sent Events.
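The query endpoint can also be driven from Python's standard library. A minimal sketch, assuming the endpoint path and JSON payload shape shown in this section; the helper name is ours, and actually sending the request requires a running server:

```python
import json
import urllib.request

def build_query_request(api_key: str, sql: str, params: list) -> urllib.request.Request:
    """Build a POST request for /api/v1/query with a parameterized
    SQL statement, matching this section's payload shape."""
    body = json.dumps({"query": sql, "params": params, "language": "sql"}).encode()
    return urllib.request.Request(
        "http://localhost:8080/api/v1/query",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_query_request("my-key", "SELECT id, name FROM users WHERE created_at > $1",
                          ["2024-01-01"])
# With a server running:
#   rows = json.load(urllib.request.urlopen(req))
```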
Base URL
http:"color:#64748b">//localhost:8080/api/v1Query Endpoint
curl -X POST http:"color:#64748b">//localhost:8080/api/v1/query \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $API_KEY" \
-d '{
"query": "SELECT id, name FROM users WHERE created_at > $1",
"params": ["2024-01-01"],
"language": "sql"
}'Supported Features
gRPC API
SynergyDB provides a high-performance gRPC API on port 9090 using Protocol Buffers for efficient binary serialization. The gRPC API supports bidirectional streaming for real-time operations and automatic client generation for many languages.
Key Operations
"color:#64748b"># Python gRPC client example
import grpc
from synergydb.v1 import synergydb_pb2, synergydb_pb2_grpc
channel = grpc.insecure_channel('localhost:9090')
stub = synergydb_pb2_grpc.SynergyDBStub(channel)
"color:#64748b"># Execute a SQL query
response = stub.Query(synergydb_pb2.QueryRequest(
query="SELECT * FROM users LIMIT 10",
language="sql"
))
"color:#64748b"># Vector similarity search
results = stub.VectorSearch(synergydb_pb2.VectorSearchRequest(
table="documents",
column="embedding",
vector=[0.1, 0.2, 0.3],
top_k=10,
metric="cosine"
))Data Models
SynergyDB natively supports 5 data models in a single engine: relational (SQL), document (JSON/BSON), graph (Cypher), vector (HNSW similarity search), and time-series. This section covers the advanced query features for vector, graph, and full-text search, plus the native SynergyQL cross-model query language.
Vector Search
SynergyDB provides first-class VECTOR(n) type support with HNSW indexing for high-performance similarity search. Vectors are stored natively in the LSM-tree storage engine and support hybrid queries combining vector similarity with SQL filters.
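As a mental model of what similarity search computes, here is an exact brute-force top-k by cosine distance in plain Python. The HNSW index answers the same question approximately in sub-linear time; the data and helper names below are illustrative:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity, the metric behind vector_cosine_ops."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def top_k(query, rows, k):
    """Exact nearest neighbours: rank every row id by distance.
    An HNSW index approximates this without scanning all rows."""
    return sorted(rows, key=lambda rid: cosine_distance(query, rows[rid]))[:k]

rows = {1: [1.0, 0.0], 2: [0.0, 1.0], 3: [0.9, 0.1]}
print(top_k([1.0, 0.0], rows, 2))  # [1, 3]
```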
Creating Vector Tables
-- Table with vector column for embeddings
CREATE TABLE documents (
  id INTEGER PRIMARY KEY,
  title TEXT,
  content TEXT,
  embedding VECTOR(1536)  -- OpenAI embedding dimension
);

-- Create HNSW index with custom parameters
CREATE INDEX idx_docs_embedding ON documents
USING HNSW (embedding vector_cosine_ops)
WITH (m = 16, ef_construction = 200);

-- Parameters:
--   m: Number of connections per layer (default: 16)
--   ef_construction: Search width during construction (default: 200)
--   Higher values = better recall, slower insert

Similarity Search
-- L2 distance search using <-> operator
SELECT id, title, embedding <-> '[0.1, 0.2, ...]' AS distance
FROM documents
ORDER BY embedding <-> '[0.1, 0.2, ...]'
LIMIT 10;
-- Cosine similarity search
SELECT id, title
FROM documents
ORDER BY COSINE_DISTANCE(embedding, '[0.1, 0.2, ...]')
LIMIT 10;
-- Hybrid search (vector + SQL filters)
SELECT id, title
FROM documents
WHERE category = 'tech'
AND created_at > '2024-01-01'
ORDER BY embedding <-> $1
LIMIT 10;
-- Vector with metadata filtering
SELECT id, title, embedding <-> $1 AS score
FROM documents
WHERE tags @> ARRAY['ai', 'ml']
ORDER BY score
LIMIT 10;

Use ef_construction = 200 and m = 16 for a good balance between recall and insert speed.

Graph Queries
SynergyDB supports Cypher-compatible graph queries with native adjacency list indexes. Graph data is stored alongside relational and document data in the same storage engine, enabling cross-model queries that combine graph traversal with SQL joins and vector similarity.
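The traversal behind SHORTEST_PATH is a breadth-first search over an adjacency list, the structure the graph index maintains. A sketch with a hypothetical FOLLOWS adjacency map:

```python
from collections import deque

def shortest_path(adj, start, end):
    """BFS over an adjacency list: the unweighted shortest-path
    traversal that a graph index makes cheap at scale."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == end:
            path = []
            while node is not None:      # walk predecessors back to start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:          # first visit is the shortest route
                prev[nxt] = node
                queue.append(nxt)
    return None  # no path

follows = {1: [2, 3], 2: [4], 3: [4], 4: [100]}
print(shortest_path(follows, 1, 100))  # [1, 2, 4, 100]
```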
Graph Pattern Matching
-- Find direct connections
SELECT * FROM GRAPH_MATCH(
(person:User {name: 'Alice'})-[:FOLLOWS]->(friend:User)
);
-- Friends of friends
SELECT * FROM GRAPH_MATCH(
(person:User {name: 'Alice'})-[:FOLLOWS*2]->(fof:User)
);
-- Variable-length paths (1 to 5 hops)
SELECT * FROM GRAPH_MATCH(
(a:User)-[:FOLLOWS*1..5]->(b:User)
)
WHERE a.id = 1;
-- Shortest path between two nodes
SELECT * FROM SHORTEST_PATH(
(start:User {id: 1}),
(end:User {id: 100}),
[:FOLLOWS]
);

Graph Indexes
-- Create a graph index for relationship traversal
CREATE INDEX idx_follows ON follows
USING GRAPH (from_user, to_user);
-- The graph index enables efficient BFS/DFS traversal
-- and shortest path computation across millions of edges

Full-Text Search
SynergyDB includes Tantivy-powered full-text search with BM25 scoring. Full-text indexes support phrase search, fuzzy matching, boolean operators, and relevance ranking. Indexes are maintained automatically and available across all protocols.
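BM25 ranks a document higher when query terms are rare in the corpus and frequent in the document, with a penalty for long documents. A toy scorer to build intuition for the relevance ranking described above (a sketch of the standard formula, not SynergyDB's or Tantivy's internals):

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """Score one tokenized document against query terms.
    corpus (all tokenized docs) supplies IDF and average length."""
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)           # document frequency
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1)    # rarity weight
        tf = doc.count(term)                               # term frequency
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

docs = [["machine", "learning", "basics"],
        ["deep", "learning"],
        ["cooking", "recipes"]]
scores = [bm25_score(["machine", "learning"], d, docs) for d in docs]
print(max(range(3), key=scores.__getitem__))  # 0 -- the doc matching both terms ranks first
```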
-- Create a full-text search index
CREATE INDEX idx_docs_content ON documents
USING FULLTEXT (content);
-- Basic full-text search
SELECT id, title, content
FROM documents
WHERE MATCH(content, 'machine learning');
-- Full-text with BM25 ranking
SELECT id, title, SCORE() as relevance
FROM documents
WHERE MATCH(content, 'neural networks')
ORDER BY relevance DESC
LIMIT 10;
-- Phrase search
SELECT * FROM documents
WHERE MATCH(content, '"deep learning"');
-- Fuzzy search (allow 2 character edits)
SELECT * FROM documents
WHERE MATCH(content, 'neurl~2');
-- Boolean operators
SELECT * FROM documents
WHERE MATCH(content, 'machine AND learning NOT python');

SynergyQL
SynergyQL is SynergyDB's native unified query language that enables cross-model queries spanning relational, document, graph, and vector data in a single statement. It extends SQL with GRAPH_MATCH clauses, vector operators, and document path expressions.
-- Cross-model query: combine graph traversal with vector similarity
SELECT
friend.name,
doc.title,
COSINE_DISTANCE(doc.embedding, $query_vector) AS similarity
FROM GRAPH_MATCH(
(me:User {id: $user_id})-[:FOLLOWS]->(friend:User)
)
JOIN documents doc ON doc.author_id = friend.id
WHERE MATCH(doc.content, 'machine learning')
ORDER BY similarity ASC
LIMIT 10;
-- Query that spans relational + document + vector
SELECT
u.name,
u.metadata->>'department' AS dept,
d.title,
d.embedding <-> $query AS score
FROM users u
JOIN documents d ON d.author_id = u.id
WHERE u.metadata @> '{"role": "researcher"}'
ORDER BY score
LIMIT 5;

AI & Embeddings
SynergyDB includes built-in support for generating and managing vector embeddings through OpenAI and HuggingFace APIs. The embeddings API is OpenAI-compatible, making it easy to integrate with LangChain, LlamaIndex, and other AI frameworks.
Embedding API
SynergyDB exposes an OpenAI-compatible embeddings endpoint at /v1/embeddings on the HTTP port (8080). This allows any OpenAI SDK or compatible client to generate embeddings through SynergyDB.
Configuration
"color:#64748b"># Set your API key for the embedding provider
export OPENAI_API_KEY="sk-..."
"color:#64748b"># Start SynergyDB -- it will automatically use OpenAI for embeddings
synergydb start --config /etc/synergydb/config.yamlGenerate Embeddings via HTTP
curl -X POST http:"color:#64748b">//localhost:8080/v1/embeddings \
-H "Content-Type: application/json" \
-d '{
"input": "Your text here",
"model": "text-embedding-3-small"
}'Supported Providers
| Provider | Models | Notes |
|---|---|---|
| OpenAI | text-embedding-3-small, text-embedding-3-large, ada-002 | Requires OPENAI_API_KEY |
| HuggingFace | sentence-transformers and other models | Requires HUGGINGFACE_API_KEY |
| Mock | Random vectors for testing | Enabled by default if no API key |
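The Mock provider returns random vectors for testing. Conceptually, such a provider only needs to be stable per input and correctly sized; a sketch of the idea (ours, not SynergyDB's actual implementation):

```python
import hashlib
import struct

def mock_embedding(text: str, dim: int = 8):
    """Deterministic pseudo-random vector derived from the text:
    stable for the same input, meaningless as semantics -- just
    enough to exercise vector pipelines without an API key."""
    out = []
    counter = 0
    while len(out) < dim:
        digest = hashlib.sha256(f"{counter}:{text}".encode()).digest()
        for i in range(0, len(digest) - 3, 4):
            (n,) = struct.unpack_from("<I", digest, i)
            out.append(n / 2**32 * 2 - 1)  # map each 32-bit word to [-1, 1)
            if len(out) == dim:
                break
        counter += 1
    return out

v1 = mock_embedding("hello")
v2 = mock_embedding("hello")
assert v1 == v2  # deterministic for the same input
```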
LangChain & LlamaIndex Integration
SynergyDB's OpenAI-compatible API means it integrates seamlessly with LangChain, LlamaIndex, and other AI orchestration frameworks. Point the framework's embedding client at SynergyDB's HTTP endpoint.
LangChain
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import PGVector
"color:#64748b"># Use SynergyDB as the embedding provider
embeddings = OpenAIEmbeddings(
model="text-embedding-3-small",
openai_api_base="http:">//localhost:8080/v1",
openai_api_key="unused" "color:#64748b"># Not validated by SynergyDB
)
"color:#64748b"># Use SynergyDB as the vector store (PostgreSQL protocol)
vectorstore = PGVector(
connection_string="postgresql:">//admin:password@localhost:5432/myapp",
embedding_function=embeddings,
collection_name="documents"
)
"color:#64748b"># Similarity search
results = vectorstore.similarity_search("machine learning papers", k=5)LlamaIndex
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.postgres import PGVectorStore
"color:#64748b"># Connect to SynergyDB via PostgreSQL protocol
vector_store = PGVectorStore.from_params(
host="localhost",
port="5432",
user="admin",
password="password",
database="myapp",
table_name="documents",
embed_dim=1536
)
"color:#64748b"># Load and index documents
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(
documents, vector_store=vector_store
)
"color:#64748b"># Query
query_engine = index.as_query_engine()
response = query_engine.query("What are the key findings?")Migration Guides
SynergyDB provides built-in tooling for zero-downtime migration from existing database instances. The migration engine supports live replication, allowing you to run both systems in parallel until you are confident in the cutover. Each migration follows the same six-step process tailored to the source database.
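The validation step of that process compares row counts and checksums between source and target. A toy sketch of the idea, using Python's stdlib sqlite3 as a stand-in for both endpoints (the fingerprint scheme here is illustrative, not SynergyDB's actual algorithm):

```python
import hashlib
import sqlite3

def table_fingerprint(conn, table):
    """Row count plus a checksum over rows in primary-key order --
    the kind of pair a migration validator compares per table."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY 1").fetchall()
    h = hashlib.sha256()
    for row in rows:
        h.update(repr(row).encode())
    return len(rows), h.hexdigest()

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for conn in (source, target):
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])

assert table_fingerprint(source, "users") == table_fingerprint(target, "users")
print("users: OK")
```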
Migrate from PostgreSQL
Step 1: Pre-migration Checklist
Step 2: Configure Source Connection
migration:
  source:
    type: postgresql
    host: source-pg.example.com
    port: 5432
    database: production
    user: replication_user
    password: {PGPASSWORD}
    ssl_mode: require
  target:
    database: production
  options:
    include_schemas:
      - public
      - app
    exclude_tables:
      - pg_stat_*
      - _prisma_migrations
    parallel_workers: 4
    batch_size: 10000

Step 3: Start Live Replication
"color:#64748b"># Start the migration (initial snapshot + continuous replication)
synergydb migrate start --config migration-pg.yaml
"color:#64748b"># Monitor progress
synergydb migrate status --migration-id pg-20260210-001The migration engine performs an initial consistent snapshot of all tables, then switches to streaming logical replication to capture ongoing changes. The migrate status command shows current lag, rows transferred, and estimated time remaining.
Step 4: Validate Data Consistency
"color:#64748b"># Run row-count and checksum validation across all tables
synergydb migrate validate --migration-id pg-20260210-001
"color:#64748b"># Sample output:
"color:#64748b"># Table Source Rows Target Rows Checksum Match
"color:#64748b"># public.users 1,284,531 1,284,531 OK
"color:#64748b"># public.orders 8,412,003 8,412,003 OK
"color:#64748b"># public.products 42,891 42,891 OK
"color:#64748b"># app.sessions 312,445 312,445 OK
"color:#64748b"># Validation PASSED: 4/4 tables consistentStep 5: Redirect Application Traffic
Update your application connection strings to point to SynergyDB. Because the PostgreSQL wire protocol is fully compatible, no application code changes are required. We recommend a phased rollout:
"color:#64748b"># Point read traffic to SynergyDB first (canary)
"color:#64748b"># Update your load balancer or DNS to route 10% of reads
"color:#64748b"># Once validated, route all reads
"color:#64748b"># Finally, redirect writes and stop replication
synergydb migrate cutover --migration-id pg-20260210-001Step 6: Decommission Source
After running on SynergyDB for a validation period (we recommend 7 days minimum), stop the replication slot on the source and decommission:
"color:#64748b"># Finalize the migration (cleans up replication slot on source)
synergydb migrate finalize --migration-id pg-20260210-001
"color:#64748b"># Verify no remaining lag
synergydb migrate status --migration-id pg-20260210-001
"color:#64748b"># Status: COMPLETEDMigrate from MySQL
Step 1: Pre-migration Checklist
Step 2: Configure Source Connection
migration:
  source:
    type: mysql
    host: source-mysql.example.com
    port: 3306
    database: production
    user: replication_user
    password: {MYSQL_PASSWORD}
    ssl_mode: required
  target:
    database: production
  options:
    include_databases:
      - production
    exclude_tables:
      - _migrations
      - sessions
    parallel_workers: 4
    batch_size: 10000
    charset: utf8mb4

Step 3: Start Live Replication
synergydb migrate start --config migration-mysql.yaml

# The engine reads the binlog from the current GTID position
synergydb migrate status --migration-id mysql-20260210-001

Step 4: Validate Data Consistency
synergydb migrate validate --migration-id mysql-20260210-001

# Validates row counts, checksums, and schema compatibility
# Reports any data type mapping differences

Step 5: Redirect Application Traffic
Update application connection strings. MySQL-compatible applications connect to SynergyDB on port 3306 without code changes. The same phased rollout approach applies: route reads first, then writes.
synergydb migrate cutover --migration-id mysql-20260210-001

Step 6: Decommission Source

synergydb migrate finalize --migration-id mysql-20260210-001

Migrate from MongoDB
Step 1: Pre-migration Checklist
Step 2: Configure Source Connection
migration:
  source:
    type: mongodb
    uri: mongodb://repl_user:password@mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0&authSource=admin
  target:
    database: production
  options:
    include_databases:
      - production
      - analytics
    exclude_collections:
      - system.profile
      - tmp_*
    parallel_workers: 8
    batch_size: 5000
    preserve_ids: true  # Keep original _id values

Step 3: Start Live Replication
synergydb migrate start --config migration-mongo.yaml

# Uses change streams for continuous replication
synergydb migrate status --migration-id mongo-20260210-001

Step 4: Validate Data Consistency

synergydb migrate validate --migration-id mongo-20260210-001

# Validates document counts, samples documents for field-level comparison,
# and verifies all indexes have been recreated

Step 5: Redirect Application Traffic
Update your MongoDB connection string to point to SynergyDB. All MongoDB drivers and ODMs (Mongoose, Mongoid, Motor) connect without changes. The migration engine continues capturing changes from the source until cutover.
synergydb migrate cutover --migration-id mongo-20260210-001

Step 6: Decommission Source

synergydb migrate finalize --migration-id mongo-20260210-001
# Closes the change stream cursor and marks migration complete

Migrate from SQLite
SynergyDB supports direct import from SQLite database files. Because SQLite does not support live replication, the migration is a one-time import operation that copies all tables, indexes, and data into SynergyDB.
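Because the source is a plain file, the importer's schema walk can be pictured with Python's stdlib sqlite3 module. A sketch using the type mapping stated later in this section; the helper name and demo data are ours:

```python
import sqlite3

# SQLite declared types -> SynergyDB types, per this guide
TYPE_MAP = {"INTEGER": "INTEGER", "TEXT": "TEXT", "REAL": "FLOAT", "BLOB": "BYTES"}

def describe(conn):
    """Walk the schema the way a one-time importer would: list each
    table and map its declared SQLite column types."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    return {
        table: {name: TYPE_MAP.get(decl.upper(), decl)
                for _, name, decl, *_ in conn.execute(f"PRAGMA table_info({table})")}
        for table in tables
    }

conn = sqlite3.connect(":memory:")  # use a file path for a real database
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, score REAL)")
print(describe(conn))  # {'users': {'id': 'INTEGER', 'name': 'TEXT', 'score': 'FLOAT'}}
```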
Step 1: Configure Source
migration:
  source:
    type: sqlite
    path: /path/to/database.sqlite
  target:
    database: production
  options:
    include_tables:
      - users
      - orders
      - products
    parallel_workers: 4
    batch_size: 10000

Step 2: Run Import
"color:#64748b"># Import all tables from SQLite
synergydb migrate start --config migration-sqlite.yaml
"color:#64748b"># Monitor progress
synergydb migrate status --migration-id sqlite-20260210-001Step 3: Validate
synergydb migrate validate --migration-id sqlite-20260210-001

# Validates row counts and schema mapping
# SQLite types are mapped to SynergyDB types automatically:
#   INTEGER -> INTEGER, TEXT -> TEXT, REAL -> FLOAT, BLOB -> BYTES

The importer preserves ROWID values and recreates all indexes. Foreign key constraints are translated to SynergyDB equivalents.

Configuration Reference
Complete reference of all configuration options available in config.yaml. Options marked with an asterisk (*) require a server restart to take effect; all others can be applied with synergydb config reload.
Cluster Settings
| Option | Type | Default | Description |
|---|---|---|---|
| cluster.node_id | string | auto | Unique identifier for this node. Auto-generated from hostname if not set. |
| cluster.advertise_addr | string | 0.0.0.0:9400 | Address and port this node advertises to peers for cluster communication. |
| cluster.peers | string[] | [] | List of peer addresses for cluster formation. Not required for single-node. |
| cluster.join_timeout | duration | 30s | Maximum time to wait for cluster join before starting in standalone mode. |
Protocol Settings
| Option | Type | Default | Description |
|---|---|---|---|
| protocols.postgresql.enabled | bool | true | Enable the PostgreSQL wire protocol listener. |
| protocols.postgresql.port * | int | 5432 | TCP port for PostgreSQL connections. |
| protocols.postgresql.max_connections | int | 500 | Maximum concurrent PostgreSQL connections. |
| protocols.postgresql.ssl | bool | true | Require TLS for PostgreSQL connections. |
| protocols.mysql.enabled | bool | true | Enable the MySQL wire protocol listener. |
| protocols.mysql.port * | int | 3306 | TCP port for MySQL connections. |
| protocols.mysql.max_connections | int | 500 | Maximum concurrent MySQL connections. |
| protocols.mongodb.enabled | bool | true | Enable the MongoDB wire protocol listener. |
| protocols.mongodb.port * | int | 27017 | TCP port for MongoDB connections. |
| protocols.mongodb.max_connections | int | 1000 | Maximum concurrent MongoDB connections. |
| protocols.redis.enabled | bool | true | Enable the Redis protocol listener. |
| protocols.redis.port * | int | 6379 | TCP port for Redis connections. |
| protocols.redis.max_connections | int | 10000 | Maximum concurrent Redis connections. |
| protocols.redis.requirepass | string | "" | Password for Redis AUTH command. Empty disables AUTH. |
| protocols.bolt.enabled | bool | true | Enable the Neo4j Bolt protocol listener. |
| protocols.bolt.port * | int | 7687 | TCP port for Neo4j Bolt connections. |
| protocols.bolt.max_connections | int | 500 | Maximum concurrent Bolt connections. |
| protocols.http.enabled | bool | true | Enable the HTTP/REST API listener. |
| protocols.http.port * | int | 8080 | TCP port for HTTP/REST and GraphQL. |
| protocols.grpc.enabled | bool | true | Enable the gRPC API listener. |
| protocols.grpc.port * | int | 9090 | TCP port for gRPC connections. |
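Listeners you do not need are best disabled to reduce the attack surface. For example, a PostgreSQL-plus-Redis-only deployment might use the following sketch, built only from options in the table above:

```yaml
protocols:
  postgresql:
    enabled: true
    ssl: true
  redis:
    enabled: true
    requirepass: change-me   # placeholder; an empty string disables AUTH
  mysql:
    enabled: false
  mongodb:
    enabled: false
  bolt:
    enabled: false
```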
Vector Settings
| Option | Type | Default | Description |
|---|---|---|---|
| vector.default_index | string | hnsw | Default vector index type for VECTOR columns. |
| vector.hot_threshold_secs | int | 3600 | Seconds before vectors move from hot (HNSW) to warm tier. |
| vector.warm_threshold_secs | int | 604800 | Seconds before vectors move from warm to cold tier (IVF-PQ). |
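The tiering behavior implied by these thresholds can be sketched as simple decision logic. This is illustrative only; the tier names and boundary handling are assumptions based on the table above:

```python
def vector_tier(seconds_since_access: int,
                hot_threshold_secs: int = 3600,
                warm_threshold_secs: int = 604800) -> str:
    """Return the storage tier a vector would occupy given its age."""
    if seconds_since_access < hot_threshold_secs:
        return "hot"   # in-memory HNSW index
    if seconds_since_access < warm_threshold_secs:
        return "warm"
    return "cold"      # IVF-PQ compressed index

print(vector_tier(600))        # hot: accessed in the last hour
print(vector_tier(86400))      # warm: one day old
print(vector_tier(2592000))    # cold: thirty days old
```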
Storage Settings
| Option | Type | Default | Description |
|---|---|---|---|
| storage.engine * | string | synergy | Storage engine backend: synergy, rocksdb, or hybrid. |
| storage.data_dir * | string | /var/lib/synergydb/data | Directory for data files. Must exist and be writable. |
| storage.wal_dir * | string | /var/lib/synergydb/wal | Directory for write-ahead log. Use a separate disk for best performance. |
| storage.max_wal_size | string | 4GB | Maximum WAL size before triggering a checkpoint. |
| storage.compaction_style | string | leveled | Compaction strategy: leveled, universal, or fifo. |
| storage.bloom_filter_bits | int | 10 | Bits per key for bloom filters. Higher values reduce false positives. |
| storage.block_cache_size | string | 8GB | Size of the in-memory block cache for data reads. |
| storage.write_buffer_size | string | 256MB | Size of each memtable write buffer. |
| storage.max_write_buffer_number | int | 4 | Maximum number of memtable write buffers before stalling writes. |
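A worked example of how write_buffer_size and max_write_buffer_number interact: with the defaults, writes stall once all memtables fill. The size parser below is a small helper written for illustration, not a SynergyDB API:

```python
def parse_size(s: str) -> int:
    """Parse sizes like '256MB' or '4GB' into bytes (binary units assumed)."""
    units = {"KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}
    for suffix, mult in units.items():
        if s.upper().endswith(suffix):
            return int(float(s[:-len(suffix)]) * mult)
    return int(s)  # plain byte count

# Worst-case memtable memory before writes stall, using the defaults above
write_buffer_size = parse_size("256MB")
max_write_buffer_number = 4
print(write_buffer_size * max_write_buffer_number)  # 1073741824 bytes = 1 GB
```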
Replication Settings
| Option | Type | Default | Description |
|---|---|---|---|
| replication.mode | string | synchronous | Replication mode: synchronous, asynchronous, or quorum. |
| replication.quorum_size | int | 2 | Number of nodes that must acknowledge a write in quorum mode. |
| replication.wal_shipping | bool | true | Ship WAL segments to replicas for point-in-time recovery. |
| replication.snapshot_interval | duration | 6h | Interval between automated base backups / snapshots. |
| replication.max_lag_bytes | int | 67108864 | Maximum replication lag in bytes before alerting. Default 64 MB. |
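The three replication modes differ in when a write may be acknowledged to the client. The decision can be sketched as follows; the exact acknowledgment semantics here are an assumption for illustration:

```python
def write_committed(acks: int, mode: str,
                    quorum_size: int = 2, cluster_size: int = 3) -> bool:
    """Decide whether a replicated write can be acknowledged."""
    if mode == "synchronous":
        return acks == cluster_size   # every node must confirm
    if mode == "quorum":
        return acks >= quorum_size    # default quorum_size is 2
    return acks >= 1                  # asynchronous: leader alone suffices

print(write_committed(2, "quorum"))        # True
print(write_committed(2, "synchronous"))   # False: one peer is missing
```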
Security Settings
| Option | Type | Default | Description |
|---|---|---|---|
| security.authentication | string | scram-sha-256 | Authentication mechanism: scram-sha-256, ldap, or oidc. |
| security.encryption_at_rest | bool | true | Enable transparent data encryption at rest. |
| security.encryption_algorithm | string | AES-256-GCM | Encryption algorithm for data at rest. |
| security.master_key_provider | string | vault | Key management system: vault, aws-kms, gcp-kms, or local. |
| security.audit_log.enabled | bool | true | Enable audit logging of all authentication and query events. |
| security.audit_log.include_queries | bool | true | Include full SQL / command text in audit log entries. |
| security.tls.min_version | string | 1.3 | Minimum TLS version. Must be 1.2 or 1.3. |
Telemetry Settings
| Option | Type | Default | Description |
|---|---|---|---|
| telemetry.metrics.enabled | bool | true | Expose Prometheus-compatible metrics endpoint. |
| telemetry.metrics.port | int | 9090 | Port for the metrics HTTP endpoint. |
| telemetry.tracing.enabled | bool | true | Enable distributed tracing via OpenTelemetry. |
| telemetry.tracing.exporter | string | otlp | Trace exporter: otlp, jaeger, or zipkin. |
| telemetry.logging.level | string | info | Log level: debug, info, warn, or error. |
| telemetry.logging.format | string | json | Log format: json or text. |
API Reference
SynergyDB exposes a REST management API on port 9090 (configurable via telemetry.metrics.port). All endpoints require a valid API key passed in the Authorization header using the Bearer scheme. Responses use JSON.
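For scripting against the management API, the same Bearer-token pattern looks like this in Python. This is a sketch using only the standard library; the base URL and key are placeholders:

```python
import urllib.request

API_KEY = "test-key"  # placeholder; substitute your real API key
BASE_URL = "https://synergydb.example.com:9090"

def management_request(path: str) -> urllib.request.Request:
    """Build an authenticated request for a management-API endpoint."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Accept": "application/json",
        },
    )

req = management_request("/api/v1/status")
print(req.get_header("Authorization"))  # Bearer test-key
# With a running server you would then do:
# import json
# status = json.load(urllib.request.urlopen(req))
```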
/api/v1/status
Returns the current health status of the SynergyDB instance, including uptime, version, cluster membership, and resource utilization.
curl -H "Authorization: Bearer $API_KEY" \
https:"color:#64748b">//synergydb.example.com:9090/api/v1/status{
"status": "healthy",
"version": "0.1.0",
"uptime_seconds": 864231,
"node_id": "node-1",
"cluster": {
"state": "leader",
"peers": 2,
"healthy_peers": 2
},
"protocols": {
"postgresql": { "state": "listening", "connections": 142 },
"mysql": { "state": "listening", "connections": 89 },
"mongodb": { "state": "listening", "connections": 203 },
"redis": { "state": "listening", "connections": 1847 }
},
"storage": {
"data_size_bytes": 107374182400,
"wal_size_bytes": 2147483648,
"disk_usage_percent": 34.2
}
}
/api/v1/protocols
Returns detailed information about each protocol listener, including configuration, connection statistics, and throughput metrics.
curl -H "Authorization: Bearer $API_KEY" \
https:"color:#64748b">//synergydb.example.com:9090/api/v1/protocols{
"protocols": [
{
"name": "postgresql",
"version": "15.0",
"port": 5432,
"ssl": true,
"max_connections": 500,
"active_connections": 142,
"queries_per_second": 3847.2,
"avg_latency_ms": 1.23
},
{
"name": "mysql",
"version": "8.0",
"port": 3306,
"ssl": true,
"max_connections": 500,
"active_connections": 89,
"queries_per_second": 2104.8,
"avg_latency_ms": 1.45
},
{
"name": "mongodb",
"version": "6.0",
"port": 27017,
"ssl": true,
"max_connections": 1000,
"active_connections": 203,
"operations_per_second": 8921.5,
"avg_latency_ms": 0.87
},
{
"name": "redis",
"version": "7.0",
"port": 6379,
"ssl": false,
"max_connections": 10000,
"active_connections": 1847,
"commands_per_second": 124503.1,
"avg_latency_ms": 0.12
}
]
}
/api/v1/migrations
Starts a new migration job. Accepts a migration configuration as the request body and returns a migration ID for tracking progress.
curl -X POST \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"source": {
"type": "postgresql",
"host": "source-pg.example.com",
"port": 5432,
"database": "production",
"user": "replication_user",
"password": "secret"
},
"options": {
"parallel_workers": 4,
"batch_size": 10000,
"include_schemas": ["public"]
}
}' \
https:"color:#64748b">//synergydb.example.com:9090/api/v1/migrations{
"migration_id": "pg-20260210-001",
"status": "initializing",
"source_type": "postgresql",
"created_at": "2026-02-10T14:30:00Z",
"estimated_rows": 12847320,
"tables_discovered": 24
}
/api/v1/metrics
Returns Prometheus-compatible metrics in text exposition format. This endpoint is typically scraped by Prometheus or a compatible agent. Metrics include per-protocol connection counts, query latency histograms, storage utilization, replication lag, and cache hit rates.
curl -H "Authorization: Bearer $API_KEY" \
https:"color:#64748b">//synergydb.example.com:9090/api/v1/metrics"color:#64748b"># HELP synergydb_connections_active Current active connections per protocol
"color:#64748b"># TYPE synergydb_connections_active gauge
synergydb_connections_active{protocol="postgresql"} 142
synergydb_connections_active{protocol="mysql"} 89
synergydb_connections_active{protocol="mongodb"} 203
synergydb_connections_active{protocol="redis"} 1847
"color:#64748b"># HELP synergydb_query_duration_seconds Query latency histogram
"color:#64748b"># TYPE synergydb_query_duration_seconds histogram
synergydb_query_duration_seconds_bucket{protocol="postgresql",le="0.001"} 48291
synergydb_query_duration_seconds_bucket{protocol="postgresql",le="0.01"} 89412
synergydb_query_duration_seconds_bucket{protocol="postgresql",le="0.1"} 91003
synergydb_query_duration_seconds_bucket{protocol="postgresql",le="1"} 91204
synergydb_query_duration_seconds_bucket{protocol="postgresql",le="+Inf"} 91210
"color:#64748b"># HELP synergydb_storage_bytes Total storage size in bytes
"color:#64748b"># TYPE synergydb_storage_bytes gauge
synergydb_storage_bytes{type="data"} 107374182400
synergydb_storage_bytes{type="wal"} 2147483648
"color:#64748b"># HELP synergydb_replication_lag_bytes Replication lag in bytes
"color:#64748b"># TYPE synergydb_replication_lag_bytes gauge
synergydb_replication_lag_bytes{peer="node-2"} 4096
synergydb_replication_lag_bytes{peer="node-3"} 8192
"color:#64748b"># HELP synergydb_cache_hit_ratio Block cache hit ratio
"color:#64748b"># TYPE synergydb_cache_hit_ratio gauge
synergydb_cache_hit_ratio 0.9847Security & Compliance
SynergyDB is designed from the ground up for regulated environments. Every layer of the stack -- from network transport to data at rest -- implements defense-in-depth security controls. This section covers encryption, access control, audit logging, and compliance certifications.
Encryption at Rest (AES-256)
All data files, WAL segments, and snapshots are encrypted using AES-256-GCM authenticated encryption. The data encryption key (DEK) is itself encrypted by a master key managed through your preferred KMS provider (HashiCorp Vault, AWS KMS, or GCP Cloud KMS). Key rotation is performed online without downtime using envelope encryption.
security:
  encryption_at_rest: true
  encryption_algorithm: AES-256-GCM
  master_key_provider: vault
  vault_addr: https://vault.internal:8200
  vault_path: secret/synergydb/master-key
  key_rotation_interval: 90d   # Rotate DEK every 90 days
Encryption in Transit (TLS 1.3)
All client-facing protocol listeners and internal cluster communication channels enforce TLS 1.3 by default. SynergyDB supports automatic certificate renewal via ACME (Let's Encrypt) or manual certificate provisioning.
security:
  tls:
    min_version: '1.3'
    cipher_suites:
      - TLS_AES_256_GCM_SHA384
      - TLS_CHACHA20_POLY1305_SHA256
    cert_file: /etc/synergydb/certs/server.crt
    key_file: /etc/synergydb/certs/server.key
    ca_file: /etc/synergydb/certs/ca.crt
    client_auth: require   # mutual TLS
Row-Level Security
SynergyDB supports row-level security (RLS) policies that restrict which rows a given user or role can access. RLS policies are defined using SQL and are enforced transparently across all protocols -- a policy defined via PostgreSQL is also enforced for queries arriving through MySQL, MongoDB, and Redis.
-- Enable row-level security on the orders table
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
-- Users can only see their own orders
CREATE POLICY user_isolation ON orders
USING (user_id = current_user_id());
-- Admins can see all orders
CREATE POLICY admin_access ON orders
TO admin_role
USING (true);
-- The policy applies to all protocols:
-- PostgreSQL: SELECT * FROM orders (filtered by policy)
-- MySQL: SELECT * FROM orders (filtered by policy)
-- MongoDB: db.orders.find() (filtered by policy)
-- Redis: HGETALL orders:* (filtered by policy)
Audit Logging
Every authentication event, DDL statement, and data access operation is logged to a tamper-evident audit log. The audit log captures the user identity, source IP, protocol used, full query text, affected rows, and timestamp. Logs can be shipped to SIEM systems via syslog, Kafka, or S3.
{
"timestamp": "2026-02-10T14:32:18.445Z",
"event_type": "query",
"user": "app_service",
"source_ip": "10.0.2.15",
"protocol": "postgresql",
"database": "production",
"query": "SELECT * FROM users WHERE id = $1",
"params": ["42"],
"rows_affected": 1,
"duration_ms": 0.82,
"status": "success"
}
Role-Based Access Control (RBAC)
SynergyDB implements a unified RBAC system that spans all protocols. Roles and permissions defined through any protocol are enforced across all others. The permission model supports database-level, schema-level, table-level, and column-level grants.
-- Create roles
CREATE ROLE readonly;
CREATE ROLE readwrite;
CREATE ROLE admin_role;
-- Grant permissions
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO readwrite;
GRANT ALL PRIVILEGES ON DATABASE production TO admin_role;
-- Assign users to roles
GRANT readonly TO analyst_user;
GRANT readwrite TO app_service;
GRANT admin_role TO dba_user;
-- Column-level security
GRANT SELECT (id, name, email) ON users TO readonly;
-- readonly cannot access: users.ssn, users.salary
HIPAA Compliance Checklist
SOC 2 Type II
SynergyDB Cloud (the managed offering) holds SOC 2 Type II certification covering the Security, Availability, and Confidentiality trust service criteria. The audit report is available upon request under NDA. Self-hosted deployments can leverage SynergyDB's built-in security controls to satisfy SOC 2 requirements within your own infrastructure audit scope. Key controls include immutable audit logs, automated access reviews, encrypted backups with tested restore procedures, and continuous monitoring via the metrics API.
Performance Tuning
SynergyDB ships with sensible defaults, but production workloads benefit from tuning based on your hardware profile and access patterns. This section covers the four most impactful tuning areas.
Memory Allocation
SynergyDB uses memory for the block cache (which serves reads), write buffers (which absorb writes), and per-connection state. A general rule of thumb is to allocate 60-70% of available RAM to the block cache and 10-15% to write buffers, leaving the rest for the OS page cache and connection overhead.
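As a quick sanity check on the rule of thumb, the split for a given amount of RAM can be computed directly. The exact percentages used here (65% cache, 12% write buffers) are illustrative choices within the recommended ranges:

```python
def suggest_memory(total_ram_gb: float) -> dict:
    """Split RAM per the 60-70% cache / 10-15% write-buffer guideline."""
    return {
        "block_cache_size_gb": round(total_ram_gb * 0.65, 1),    # read cache
        "write_buffers_gb": round(total_ram_gb * 0.12, 1),       # memtables
        "os_and_connections_gb": round(total_ram_gb * 0.23, 1),  # the rest
    }

print(suggest_memory(64))
```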
"color:#64748b"># For a 64 GB server
storage:
block_cache_size: 40GB "color:#64748b"># 62.5% of RAM -- primary read cache
write_buffer_size: 512MB "color:#64748b"># Size of each memtable
max_write_buffer_number: 6 "color:#64748b"># Up to 3 GB total write buffers
max_open_files: 10000 "color:#64748b"># OS file descriptor limit
"color:#64748b"># Per-protocol connection memory
protocols:
postgresql:
work_mem: 64MB "color:#64748b"># Memory for sorts and hash joins per query
maintenance_work_mem: 512MB "color:#64748b"># Memory for VACUUM, CREATE INDEX
mysql:
sort_buffer_size: 64MB
join_buffer_size: 64MB
mongodb:
wired_tiger_cache_size: 8GB "color:#64748b"># Additional cache for document workloads
redis:
maxmemory: 4GB "color:#64748b"># Limit for Redis key-value data
maxmemory_policy: allkeys-lru "color:#64748b"># Eviction policy when limit is reachedsynergydb benchmark --duration 60s to profile your workload and get personalized tuning recommendations. The benchmark tool analyzes read/write ratios, key sizes, and query complexity to suggest optimal settings.Connection Pooling
SynergyDB includes a built-in connection pooler for the PostgreSQL and MySQL protocols, eliminating the need for external tools like PgBouncer or ProxySQL. The pooler supports transaction-mode and session-mode pooling.
protocols:
  postgresql:
    pooling:
      enabled: true
      mode: transaction             # transaction | session
      pool_size: 100                # Backend connections to storage engine
      max_client_connections: 5000  # Client-facing connection limit
      idle_timeout: 300s            # Close idle backend connections after 5 min
      connection_lifetime: 3600s    # Recycle backend connections after 1 hour
  mysql:
    pooling:
      enabled: true
      mode: transaction
      pool_size: 100
      max_client_connections: 5000
      idle_timeout: 300s
Transaction-mode pooling multiplexes client connections onto a smaller pool of backend connections. A client holds a backend connection only for the duration of a transaction, allowing thousands of clients to share a much smaller pool. This is the recommended mode for stateless web applications.
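The transaction-mode multiplexing described above can be illustrated with a small model. This is a sketch only, not the real pooler; the class and method names are invented:

```python
class TransactionPool:
    """Toy model of transaction-mode pooling: a backend connection is
    held only for the duration of one transaction, so many clients can
    share a small pool."""

    def __init__(self, pool_size: int):
        self.pool_size = pool_size
        self.free = list(range(pool_size))   # backend connection ids
        self.peak_in_use = 0

    def run_transaction(self, work):
        backend = self.free.pop()            # acquire for one transaction
        in_use = self.pool_size - len(self.free)
        self.peak_in_use = max(self.peak_in_use, in_use)
        try:
            return work(backend)
        finally:
            self.free.append(backend)        # release as soon as the txn ends

pool = TransactionPool(pool_size=4)
# 1000 sequential client transactions are all served by one backend,
# because each one releases its connection at commit time.
for _ in range(1000):
    pool.run_transaction(lambda backend: backend)
print(pool.peak_in_use)  # 1
```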
Index Optimization
SynergyDB unifies indexing across all protocols. An index created through PostgreSQL accelerates queries arriving from any protocol. Use the built-in index advisor to identify missing indexes and redundant ones.
-- View the index advisor recommendations
SELECT * FROM synergydb_index_advisor();
-- Sample output:
-- recommendation | table | columns | estimated_speedup
-- CREATE INDEX | orders | (user_id, date) | 47x
-- CREATE INDEX | products | (category_id) | 12x
-- DROP INDEX | users | idx_users_tmp | saves 2.1 GB
-- Create a concurrent index (no table lock)
CREATE INDEX CONCURRENTLY idx_orders_user_date
ON orders (user_id, created_at DESC);
-- Partial index for hot data
CREATE INDEX idx_orders_active
ON orders (status, priority)
WHERE status IN ('pending', 'processing');
-- Covering index to avoid table lookups
CREATE INDEX idx_users_email_name
ON users (email) INCLUDE (name, created_at);
-- Monitor index usage
SELECT index_name, idx_scan, idx_tup_read, idx_tup_fetch, size_bytes
FROM synergydb_index_stats
ORDER BY idx_scan DESC;
Query Cache Configuration
SynergyDB includes an intelligent query result cache that automatically invalidates entries when underlying data changes. The cache operates at the unified storage layer, so a cached result from a PostgreSQL query also benefits identical queries arriving via MySQL.
storage:
  query_cache:
    enabled: true
    max_size: 2GB            # Maximum memory for cached results
    max_entry_size: 16MB     # Skip caching results larger than this
    ttl: 300s                # Default TTL for cache entries
    invalidation: immediate  # immediate | lazy
    exclude_patterns:
      - "SELECT.*NOW()"      # Never cache time-dependent queries
      - "SELECT.*RANDOM()"   # Never cache non-deterministic queries
# Per-protocol cache hints
postgresql:
  enable_prepared_cache: true  # Cache prepared statement plans
  plan_cache_size: 500         # Max cached plans per connection
redis:
  passthrough: true            # Redis GET/SET bypass the query cache
                               # (Redis protocol has its own semantics)
You can control caching per-query using SQL hints:
-- Force cache bypass for a specific query
SELECT /*+ NO_CACHE */ * FROM orders WHERE status = 'pending';
-- Set a custom TTL for a specific query
SELECT /*+ CACHE_TTL(60) */ count(*) FROM users;
-- View cache statistics
SELECT hit_count, miss_count, eviction_count,
hit_count::float / (hit_count + miss_count) AS hit_ratio,
memory_used_bytes, entry_count
FROM synergydb_query_cache_stats;
For write-heavy workloads, consider disabling the query cache or reducing max_size. The invalidation overhead can outweigh the read benefits when data changes frequently. Monitor the hit ratio via synergydb_query_cache_stats to make data-driven decisions.
Last updated: February 2026 · SynergyDB v0.1.0