Docs/SynergyDB v0.1

Documentation

Everything you need to install, configure, and operate SynergyDB in production. This guide covers all 16+ wire protocols, 5 data models (relational, document, graph, vector, time-series), AI/embedding integration, migration workflows, security hardening, and performance tuning.

Getting Started

SynergyDB is an AI-native universal database written in Rust. It ships as a single binary and an official Docker image. A typical deployment exposes 16+ protocol ports simultaneously, each handling the corresponding wire protocol natively. All protocols read from and write to the same underlying LSM-tree storage engine, so data written through one protocol is instantly visible through any other. SynergyDB supports 5 data models: relational, document, graph, vector, and time-series.

Installation

Choose the method that fits your environment. Docker is recommended for evaluation and development; the binary install is preferred for bare-metal production deployments where you need fine-grained control over systemd units, file descriptors, and kernel parameters.

Docker (recommended)

terminal
# Docker (recommended)
docker run -d --name synergydb \
  -p 5432:5432 \
  -p 3306:3306 \
  -p 27017:27017 \
  -p 6379:6379 \
  -p 7687:7687 \
  -p 8080:8080 \
  -p 9090:9090 \
  synergytechnologies/synergydb:latest

The image defaults to the bundled configuration. Mount a custom config file to override any setting:

terminal
docker run -d --name synergydb \
  -v /etc/synergydb/config.yaml:/etc/synergydb/config.yaml \
  -p 5432:5432 \
  -p 3306:3306 \
  -p 27017:27017 \
  -p 6379:6379 \
  -p 7687:7687 \
  -p 8080:8080 \
  -p 9090:9090 \
  synergytechnologies/synergydb:latest

Binary Install

terminal
# Binary install
curl -fsSL https://get.synergydb.io | sh
synergydb start --config /etc/synergydb/config.yaml

Package Managers

terminal
# macOS
brew install synergytechnologies/tap/synergydb

# Debian / Ubuntu
curl -fsSL https://repo.synergydb.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/synergydb.gpg
echo "deb [signed-by=/usr/share/keyrings/synergydb.gpg] https://repo.synergydb.io/apt stable main" \
  | sudo tee /etc/apt/sources.list.d/synergydb.list
sudo apt update && sudo apt install synergydb

# RHEL / Fedora
sudo dnf install -y https://repo.synergydb.io/rpm/synergydb-latest-1.x86_64.rpm

SynergyDB requires a minimum of 2 CPU cores and 4 GB RAM for evaluation. Production deployments should allocate at least 8 cores and 32 GB RAM.

Configuration

All SynergyDB settings live in a single YAML file, typically at /etc/synergydb/config.yaml. Every option can also be set via environment variables using the prefix SYNERGYDB_ with underscores replacing dots (e.g., SYNERGYDB_STORAGE_DATA_DIR).
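The dot-to-underscore mapping can be expressed as a one-line helper (illustrative Python, not part of any SynergyDB client):

```python
def to_env_var(config_key: str) -> str:
    """Map a dotted config key to its SYNERGYDB_ environment variable name."""
    return "SYNERGYDB_" + config_key.replace(".", "_").upper()

# Examples:
print(to_env_var("storage.data_dir"))           # SYNERGYDB_STORAGE_DATA_DIR
print(to_env_var("protocols.postgresql.port"))  # SYNERGYDB_PROTOCOLS_POSTGRESQL_PORT
```

Environment variables take precedence over the YAML file, which is convenient for container deployments.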

config.yaml
# SynergyDB Configuration
# /etc/synergydb/config.yaml

cluster:
  node_id: node-1
  advertise_addr: 10.0.1.10:9400
  peers:
    - 10.0.1.11:9400
    - 10.0.1.12:9400

# Protocol listeners
protocols:
  postgresql:
    enabled: true
    host: 0.0.0.0
    port: 5432
    max_connections: 500
    ssl: true
    ssl_cert: /etc/synergydb/certs/server.crt
    ssl_key: /etc/synergydb/certs/server.key
  mysql:
    enabled: true
    host: 0.0.0.0
    port: 3306
    max_connections: 500
    ssl: true
  mongodb:
    enabled: true
    host: 0.0.0.0
    port: 27017
    max_connections: 1000
    ssl: true
  redis:
    enabled: true
    host: 0.0.0.0
    port: 6379
    max_connections: 10000
    requirepass: ''
  bolt:
    enabled: true
    host: 0.0.0.0
    port: 7687
    max_connections: 500
  http:
    enabled: true
    host: 0.0.0.0
    port: 8080
    max_connections: 1000
  grpc:
    enabled: true
    host: 0.0.0.0
    port: 9090
    max_connections: 1000

# Vector index
vector:
  default_index: hnsw
  hot_threshold_secs: 3600
  warm_threshold_secs: 604800

# Storage engine
storage:
  engine: synergy  # synergy | rocksdb | hybrid
  data_dir: /var/lib/synergydb/data
  wal_dir: /var/lib/synergydb/wal
  max_wal_size: 4GB
  compaction_style: leveled
  bloom_filter_bits: 10
  block_cache_size: 8GB
  write_buffer_size: 256MB
  max_write_buffer_number: 4

# Replication
replication:
  mode: synchronous  # synchronous | asynchronous | quorum
  quorum_size: 2
  wal_shipping: true
  snapshot_interval: 6h
  max_lag_bytes: 67108864  # 64MB

# Security
security:
  authentication: scram-sha-256  # scram-sha-256 | ldap | oidc
  encryption_at_rest: true
  encryption_algorithm: AES-256-GCM
  master_key_provider: vault  # vault | aws-kms | gcp-kms | local
  vault_addr: https://vault.internal:8200
  vault_path: secret/synergydb/master-key
  audit_log:
    enabled: true
    path: /var/log/synergydb/audit.log
    format: json
    include_queries: true
  tls:
    min_version: '1.3'
    cipher_suites:
      - TLS_AES_256_GCM_SHA384
      - TLS_CHACHA20_POLY1305_SHA256

# Observability
telemetry:
  metrics:
    enabled: true
    endpoint: /metrics
    port: 9090
  tracing:
    enabled: true
    exporter: otlp
    endpoint: http://otel-collector:4317
  logging:
    level: info  # debug | info | warn | error
    format: json
    output: /var/log/synergydb/synergydb.log

Run synergydb config validate --config /etc/synergydb/config.yaml to check your configuration before starting the server. The validator catches type errors, port conflicts, and invalid TLS certificate paths.

Quick Start

Once SynergyDB is running, you can connect using the native client for any supported protocol. Every protocol accesses the same storage layer, so a table created through PostgreSQL is immediately queryable through MySQL, visible as a collection in MongoDB, traversable as a graph via Neo4j Bolt, and accessible as a key namespace in Redis. Vector and full-text indexes are available across all protocols.

Connect via PostgreSQL

terminal
psql -h localhost -p 5432 -U admin -d synergydb

psql session
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT UNIQUE NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

INSERT INTO users (name, email) VALUES
  ('Alice Chen', 'alice@example.com'),
  ('Bob Martinez', 'bob@example.com'),
  ('Carol Johnson', 'carol@example.com');

SELECT * FROM users WHERE name LIKE 'A%';

Connect via MySQL

terminal
mysql -h 127.0.0.1 -P 3306 -u admin -p synergydb

mysql session
-- Query the same table created via PostgreSQL
SELECT id, name, email FROM users ORDER BY created_at;

-- MySQL-specific syntax works natively
INSERT INTO users (name, email) VALUES ('Dave Kim', 'dave@example.com');

SELECT COUNT(*) AS total_users FROM users;

Connect via MongoDB

terminal
mongosh "mongodb://admin:password@localhost:27017/synergydb"

mongosh session
// The "users" table is accessible as a collection
db.users.find({ name: /^A/ })

// Insert a document -- it appears as a row in SQL protocols
db.users.insertOne({
  name: "Eve Park",
  email: "eve@example.com",
  created_at: new Date()
})

// Aggregation pipeline
db.users.aggregate([
  { $group: { _id: null, count: { $sum: 1 } } }
])

Connect via Redis

terminal
redis-cli -h localhost -p 6379

redis-cli session
# Access user records as Redis hashes
HGETALL users:1

# Set a cache value
SET session:abc123 '{"user_id":1,"role":"admin"}' EX 3600

# Query keys by pattern
KEYS users:*

# Use Redis data structures alongside SQL tables
LPUSH notifications '{"type":"welcome","user_id":5}'
LRANGE notifications 0 -1

Data written via Redis SET and HSET commands is stored in the unified storage engine and can be queried via SQL through the redis_keys virtual table.

Protocol Guides

SynergyDB implements each wire protocol at the network layer, providing native compatibility with existing clients, drivers, and ORMs. Each protocol handler translates incoming queries into the unified internal query plan representation before executing against the shared storage engine.

PostgreSQL Protocol

SynergyDB implements PostgreSQL wire protocol v3 (PostgreSQL 15+ compatible). All standard PostgreSQL clients, drivers, and ORMs connect without modification.

Connection String

connection string
postgresql://user:password@host:5432/database?sslmode=require

# With all parameters
postgresql://admin:s3cureP4ss@synergydb.example.com:5432/myapp?sslmode=verify-full&sslrootcert=/path/to/ca.crt&connect_timeout=10&application_name=myservice

Supported Features

Transactions (BEGIN / COMMIT / ROLLBACK)
Savepoints
Prepared statements
COPY IN / COPY OUT
LISTEN / NOTIFY
Advisory locks
JSON / JSONB operators
CTEs and window functions
Full-text search (tsvector)
Row-level security policies
Custom types and domains
PL/pgSQL functions

ORM Compatibility

The following ORMs and query builders have been tested and verified with SynergyDB's PostgreSQL endpoint:

| ORM / Driver | Language | Status | Notes |
|---|---|---|---|
| Prisma | TypeScript | Full | Use postgresql provider. Migrations and introspection supported. |
| Drizzle ORM | TypeScript | Full | pg driver mode. Push and generate both work. |
| TypeORM | TypeScript | Full | postgres driver type. All decorators supported. |
| SQLAlchemy | Python | Full | psycopg2 and asyncpg dialects. Alembic migrations work. |
| ActiveRecord | Ruby | Full | postgresql adapter. Rails migrations fully supported. |
| GORM | Go | Full | Use postgres driver. AutoMigrate supported. |
| Diesel | Rust | Full | postgres feature flag. Compile-time query checking works. |
| JDBC | Java | Full | PostgreSQL JDBC driver 42.x. HikariCP connection pooling supported. |

prisma/schema.prisma
// Prisma schema for SynergyDB (PostgreSQL protocol)
datasource db {
  provider = "postgresql"
  url      = "postgresql://admin:password@localhost:5432/myapp"
}

generator client {
  provider = "prisma-client-js"
}

// Models work exactly as with native PostgreSQL
// SynergyDB handles schema introspection and migrations

MySQL Protocol

SynergyDB implements the MySQL wire protocol (MySQL 8.0 compatible), supporting the COM_QUERY, COM_STMT_PREPARE, COM_STMT_EXECUTE, and COM_STMT_CLOSE command types. Authentication uses caching_sha2_password by default.

Connection String

connection string
mysql://user:password@host:3306/database

# DSN format (Go, PHP)
user:password@tcp(host:3306)/database?tls=true&parseTime=true&multiStatements=true

# JDBC
jdbc:mysql://host:3306/database?useSSL=true&serverTimezone=UTC

Supported Features

Transactions (InnoDB semantics)
Prepared statements (binary protocol)
Multi-statement execution
Stored procedures
Triggers
Views
JSON functions (JSON_EXTRACT, etc.)
Window functions
Common table expressions
Generated columns
Spatial data types
LOAD DATA INFILE

ORM Compatibility

| ORM / Driver | Language | Status |
|---|---|---|
| Prisma | TypeScript | Full |
| Drizzle ORM | TypeScript | Full |
| Sequelize | JavaScript | Full |
| SQLAlchemy | Python | Full |
| Django ORM | Python | Full |
| Laravel Eloquent | PHP | Full |
| GORM | Go | Full |
| Hibernate | Java | Full |

MongoDB Protocol

SynergyDB speaks the MongoDB wire protocol (OP_MSG format, MongoDB 6.0+ compatible). It translates BSON documents into the internal columnar representation while preserving full document semantics including nested objects, arrays, and schemaless flexibility.

Connection String

connection string
mongodb://user:password@host:27017/database

# Replica set (SynergyDB cluster)
mongodb://user:password@node1:27017,node2:27017,node3:27017/database?replicaSet=synergy-rs&authSource=admin

# With TLS
mongodb://user:password@host:27017/database?tls=true&tlsCAFile=/path/to/ca.pem

Supported Features

CRUD operations (insert, find, update, delete)
Aggregation pipeline (all stages)
Change streams
GridFS
Bulk write operations
Text search indexes
Geospatial queries (2dsphere)
Transactions (multi-document)
Read/write concerns
Collations
Schema validation
TTL indexes

Driver Compatibility

Official MongoDB drivers for all major languages work without modification. This includes the Node.js driver (mongodb), PyMongo, the Java driver, Motor (async Python), the Go driver, and Mongoid (Ruby). Mongoose ODM is fully supported including schema definitions, middleware, and population.

example.js
// Node.js MongoDB driver -- connects to SynergyDB without changes
const { MongoClient } = require('mongodb')

const client = new MongoClient('mongodb://admin:password@localhost:27017')
await client.connect()

const db = client.db('myapp')
const users = db.collection('users')

// Aggregation pipeline with $lookup (joins across collections)
const results = await users.aggregate([
  { $match: { status: 'active' } },
  { $lookup: {
      from: 'orders',
      localField: '_id',
      foreignField: 'user_id',
      as: 'orders'
  }},
  { $addFields: { order_count: { $size: '$orders' } } },
  { $sort: { order_count: -1 } },
  { $limit: 10 }
]).toArray()

// Change streams for real-time updates
const changeStream = users.watch()
changeStream.on('change', (change) => {
  console.log('Change detected:', change.operationType)
})

Redis Protocol

SynergyDB implements the RESP3 protocol (Redis 7.0+ compatible). Unlike standalone Redis, data stored through the Redis protocol is persisted in the same durable storage engine used by SQL protocols, providing full ACID guarantees and replication.

Connection String

connection string
redis://user:password@host:6379/0

# With TLS
rediss://user:password@host:6379/0

# Cluster mode (SynergyDB automatically handles slot distribution)
redis://host:6379?cluster=true

Supported Commands

SynergyDB supports 200+ Redis commands across all major data types. The following command groups are fully implemented:

String commands (GET, SET, MGET, INCR, etc.)
Hash commands (HGET, HSET, HGETALL, etc.)
List commands (LPUSH, RPUSH, LRANGE, etc.)
Set commands (SADD, SMEMBERS, SINTER, etc.)
Sorted set commands (ZADD, ZRANGE, etc.)
Stream commands (XADD, XREAD, XRANGE)
Pub/Sub (SUBSCRIBE, PUBLISH)
Transactions (MULTI, EXEC, WATCH)
Scripting (EVAL, EVALSHA -- Lua 5.1)
Key expiration (EXPIRE, TTL, PEXPIRE)
Scan commands (SCAN, HSCAN, SSCAN)
HyperLogLog (PFADD, PFCOUNT, PFMERGE)

Data Type Mapping

When data is accessed across protocols, SynergyDB transparently maps between Redis data types and relational / document structures:

| Redis Type | SQL Representation | MongoDB Representation |
|---|---|---|
| String | TEXT column in redis_keys table | String field in redis_keys collection |
| Hash | Row in the corresponding table (fields map to columns) | Document with fields as top-level keys |
| List | JSONB array in redis_keys table | Array field |
| Set | JSONB array (deduplicated) in redis_keys table | Array field (deduplicated) |
| Sorted Set | Rows in redis_zsets table with score column | Array of {value, score} objects |
| Stream | Rows in redis_streams table with timestamp ordering | Capped collection with _id as stream ID |
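The Hash row in the mapping is the interesting case: a hash keyed as table:pk surfaces as a row in that table. A plain-Python sketch of the idea (the function names and output shapes are illustrative, not a SynergyDB API):

```python
# HSET users:1 name "Alice Chen" email "alice@example.com"
redis_hash = {"name": "Alice Chen", "email": "alice@example.com"}

def as_sql_row(key: str, fields: dict) -> dict:
    """'users:1' -> a row in table 'users' with id 1; hash fields map to columns."""
    table, pk = key.split(":", 1)
    return {"_table": table, "id": int(pk), **fields}

def as_mongo_doc(key: str, fields: dict) -> dict:
    """The same hash seen through the MongoDB protocol: fields become top-level keys."""
    _, pk = key.split(":", 1)
    return {"_id": int(pk), **fields}

print(as_sql_row("users:1", redis_hash))
print(as_mongo_doc("users:1", redis_hash))
```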

Neo4j Bolt Protocol

SynergyDB implements Neo4j Bolt protocol versions 4.x and 5.x, providing wire-level compatibility with official Neo4j drivers. Connect from any Neo4j client to run Cypher queries against SynergyDB's graph data model. Graph data is stored natively using adjacency list indexes with BFS/DFS and shortest path support.

Connection String

connection string
bolt://user:password@host:7687

# With TLS
bolt+s://user:password@host:7687

# Neo4j URI scheme (also supported)
neo4j://user:password@host:7687

Supported Features

Cypher query execution
PackStream binary serialization
Transactions (BEGIN / COMMIT / ROLLBACK)
Result streaming (PULL / DISCARD)
Multi-database support
Bookmarks
GRAPH_MATCH pattern matching
Variable-length path traversal
Shortest path algorithms
Graph index (adjacency list)
BFS / DFS traversal
Bolt 4.x and 5.x support

Cypher Query Examples

cypher session
// Find direct connections
MATCH (person:User {name: 'Alice'})-[:FOLLOWS]->(friend:User)
RETURN friend.name, friend.email;

// Friends of friends (2 hops)
MATCH (person:User {name: 'Alice'})-[:FOLLOWS*2]->(fof:User)
RETURN DISTINCT fof.name;

// Variable-length paths (1 to 5 hops)
MATCH (a:User)-[:FOLLOWS*1..5]->(b:User)
WHERE a.id = 1
RETURN b.name;

// Shortest path
MATCH p = shortestPath((a:User {id: 1})-[:FOLLOWS*]->(b:User {id: 100}))
RETURN p;

Driver Compatibility

Official Neo4j drivers for all major languages connect without modification, including the Python neo4j driver, JavaScript neo4j-driver, Java driver, Go driver, and .NET driver.

example.py
# Python with the neo4j driver
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("admin", "password"))

with driver.session() as session:
    result = session.run("""
        MATCH (u:User {name: $name})-[:FOLLOWS]->(friend)
        RETURN friend.name AS name, friend.email AS email
    """, name="Alice")

    for record in result:
        print(f"{record['name']}: {record['email']}")

HTTP/REST API

SynergyDB provides a comprehensive REST API on port 8080 for database operations, including query execution, schema management, vector search, and an OpenAI-compatible embeddings endpoint. The API uses JSON for request/response payloads and supports streaming via NDJSON and Server-Sent Events.

Base URL

base URL
http://localhost:8080/api/v1

Query Endpoint

SQL query via REST
curl -X POST http://localhost:8080/api/v1/query \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "query": "SELECT id, name FROM users WHERE created_at > $1",
    "params": ["2024-01-01"],
    "language": "sql"
  }'
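The same request can be built from Python with only the standard library; a sketch that mirrors the curl call above (the urlopen line is commented out so the snippet stands alone without a running server):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # substitute your real key

payload = {
    "query": "SELECT id, name FROM users WHERE created_at > $1",
    "params": ["2024-01-01"],
    "language": "sql",
}
req = urllib.request.Request(
    "http://localhost:8080/api/v1/query",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# with urllib.request.urlopen(req) as resp:
#     rows = json.load(resp)
```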

Supported Features

SQL / Cypher / MQL query execution
GraphQL endpoint
OpenAI-compatible embeddings API
Vector similarity search
Schema management (CREATE / DROP / ALTER)
Batch operations
Streaming responses (NDJSON, SSE)
Health and status endpoints

gRPC API

SynergyDB provides a high-performance gRPC API on port 9090 using Protocol Buffers for efficient binary serialization. The gRPC API supports bidirectional streaming for real-time operations and automatic client generation for many languages.

Key Operations

Query execution (SQL, Cypher, MQL)
Streaming query results (RecordBatch)
Transaction management
Vector search (VectorSearchRequest)
Graph traversal (stream GraphNode)
Full-text search
Schema management
Bidirectional streaming (ExecutePipeline)

grpc_client.py
# Python gRPC client example
import grpc
from synergydb.v1 import synergydb_pb2, synergydb_pb2_grpc

channel = grpc.insecure_channel('localhost:9090')
stub = synergydb_pb2_grpc.SynergyDBStub(channel)

# Execute a SQL query
response = stub.Query(synergydb_pb2.QueryRequest(
    query="SELECT * FROM users LIMIT 10",
    language="sql"
))

# Vector similarity search
results = stub.VectorSearch(synergydb_pb2.VectorSearchRequest(
    table="documents",
    column="embedding",
    vector=[0.1, 0.2, 0.3],
    top_k=10,
    metric="cosine"
))

Data Models

SynergyDB natively supports 5 data models in a single engine: relational (SQL), document (JSON/BSON), graph (Cypher), vector (HNSW similarity search), and time-series. This section covers the advanced query features for vector, graph, and full-text search, plus the native SynergyQL cross-model query language.

Vector Search

SynergyDB provides first-class VECTOR(n) type support with HNSW indexing for high-performance similarity search. Vectors are stored natively in the LSM-tree storage engine and support hybrid queries combining vector similarity with SQL filters.
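The query examples below pass vectors as text literals like '[0.1, 0.2, ...]'. Building that literal from a Python list is a one-liner (a hypothetical helper for illustration, not part of an official client):

```python
def to_vector_literal(vec: list[float]) -> str:
    """Format a list of floats as a VECTOR text literal, e.g. '[0.1,0.2,0.3]'."""
    return "[" + ",".join(repr(x) for x in vec) + "]"

print(to_vector_literal([0.1, 0.2, 0.3]))  # [0.1,0.2,0.3]
```

Pass the resulting string as a bound parameter ($1) rather than interpolating it into the SQL text.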

Creating Vector Tables

vector schema
-- Table with vector column for embeddings
CREATE TABLE documents (
    id INTEGER PRIMARY KEY,
    title TEXT,
    content TEXT,
    embedding VECTOR(1536)  -- OpenAI embedding dimension
);

-- Create HNSW index with custom parameters
CREATE INDEX idx_docs_embedding ON documents
    USING HNSW (embedding vector_cosine_ops)
    WITH (m = 16, ef_construction = 200);

-- Parameters:
-- m: Number of connections per layer (default: 16)
-- ef_construction: Search width during construction (default: 200)
-- Higher values = better recall, slower insert

Similarity Search

vector queries
-- L2 distance search using <-> operator
SELECT id, title, embedding <-> '[0.1, 0.2, ...]' AS distance
FROM documents
ORDER BY embedding <-> '[0.1, 0.2, ...]'
LIMIT 10;

-- Cosine similarity search
SELECT id, title
FROM documents
ORDER BY COSINE_DISTANCE(embedding, '[0.1, 0.2, ...]')
LIMIT 10;

-- Hybrid search (vector + SQL filters)
SELECT id, title
FROM documents
WHERE category = 'tech'
  AND created_at > '2024-01-01'
ORDER BY embedding <-> $1
LIMIT 10;

-- Vector with metadata filtering
SELECT id, title, embedding <-> $1 AS score
FROM documents
WHERE tags @> ARRAY['ai', 'ml']
ORDER BY score
LIMIT 10;

HNSW indexes provide sub-millisecond search (0.8 ms p50) on datasets with millions of vectors. Use ef_construction = 200 and m = 16 for a good balance between recall and insert speed.

Graph Queries

SynergyDB supports Cypher-compatible graph queries with native adjacency list indexes. Graph data is stored alongside relational and document data in the same storage engine, enabling cross-model queries that combine graph traversal with SQL joins and vector similarity.

Graph Pattern Matching

graph queries
-- Find direct connections
SELECT * FROM GRAPH_MATCH(
    (person:User {name: 'Alice'})-[:FOLLOWS]->(friend:User)
);

-- Friends of friends
SELECT * FROM GRAPH_MATCH(
    (person:User {name: 'Alice'})-[:FOLLOWS*2]->(fof:User)
);

-- Variable-length paths (1 to 5 hops)
SELECT * FROM GRAPH_MATCH(
    (a:User)-[:FOLLOWS*1..5]->(b:User)
)
WHERE a.id = 1;

-- Shortest path between two nodes
SELECT * FROM SHORTEST_PATH(
    (start:User {id: 1}),
    (end:User {id: 100}),
    [:FOLLOWS]
);

Graph Indexes

graph index
-- Create a graph index for relationship traversal
CREATE INDEX idx_follows ON follows
    USING GRAPH (from_user, to_user);

-- The graph index enables efficient BFS/DFS traversal
-- and shortest path computation across millions of edges

Full-Text Search

SynergyDB includes Tantivy-powered full-text search with BM25 scoring. Full-text indexes support phrase search, fuzzy matching, boolean operators, and relevance ranking. Indexes are maintained automatically and available across all protocols.

full-text search
-- Create a full-text search index
CREATE INDEX idx_docs_content ON documents
    USING FULLTEXT (content);

-- Basic full-text search
SELECT id, title, content
FROM documents
WHERE MATCH(content, 'machine learning');

-- Full-text with BM25 ranking
SELECT id, title, SCORE() as relevance
FROM documents
WHERE MATCH(content, 'neural networks')
ORDER BY relevance DESC
LIMIT 10;

-- Phrase search
SELECT * FROM documents
WHERE MATCH(content, '"deep learning"');

-- Fuzzy search (allow 2 character edits)
SELECT * FROM documents
WHERE MATCH(content, 'neurl~2');

-- Boolean operators
SELECT * FROM documents
WHERE MATCH(content, 'machine AND learning NOT python');

SynergyQL

SynergyQL is SynergyDB's native unified query language that enables cross-model queries spanning relational, document, graph, and vector data in a single statement. It extends SQL with GRAPH_MATCH clauses, vector operators, and document path expressions.

synergyql cross-model query
-- Cross-model query: combine graph traversal with vector similarity
SELECT
    friend.name,
    doc.title,
    COSINE_DISTANCE(doc.embedding, $query_vector) AS similarity
FROM GRAPH_MATCH(
    (me:User {id: $user_id})-[:FOLLOWS]->(friend:User)
)
JOIN documents doc ON doc.author_id = friend.id
WHERE MATCH(doc.content, 'machine learning')
ORDER BY similarity ASC
LIMIT 10;

-- Query that spans relational + document + vector
SELECT
    u.name,
    u.metadata->>'department' AS dept,
    d.title,
    d.embedding <-> $query AS score
FROM users u
JOIN documents d ON d.author_id = u.id
WHERE u.metadata @> '{"role": "researcher"}'
ORDER BY score
LIMIT 5;

SynergyQL is a superset of standard SQL. All existing SQL queries work without modification, while the extended syntax enables powerful cross-model operations unique to SynergyDB.

AI & Embeddings

SynergyDB includes built-in support for generating and managing vector embeddings through OpenAI and HuggingFace APIs. The embeddings API is OpenAI-compatible, making it easy to integrate with LangChain, LlamaIndex, and other AI frameworks.

Embedding API

SynergyDB exposes an OpenAI-compatible embeddings endpoint at /v1/embeddings on the HTTP port (8080). This allows any OpenAI SDK or compatible client to generate embeddings through SynergyDB.

Configuration

terminal
# Set your API key for the embedding provider
export OPENAI_API_KEY="sk-..."

# Start SynergyDB -- it will automatically use OpenAI for embeddings
synergydb start --config /etc/synergydb/config.yaml

Generate Embeddings via HTTP

request
curl -X POST http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Your text here",
    "model": "text-embedding-3-small"
  }'

Supported Providers

| Provider | Models | Notes |
|---|---|---|
| OpenAI | text-embedding-3-small, text-embedding-3-large, ada-002 | Requires OPENAI_API_KEY |
| HuggingFace | sentence-transformers and other models | Requires HUGGINGFACE_API_KEY |
| Mock | Random vectors for testing | Enabled by default if no API key |

LangChain & LlamaIndex Integration

SynergyDB's OpenAI-compatible API means it integrates seamlessly with LangChain, LlamaIndex, and other AI orchestration frameworks. Point the framework's embedding client at SynergyDB's HTTP endpoint.

LangChain

langchain_example.py
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import PGVector

# Use SynergyDB as the embedding provider
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-small",
    openai_api_base="http://localhost:8080/v1",
    openai_api_key="unused"  # Not validated by SynergyDB
)

# Use SynergyDB as the vector store (PostgreSQL protocol)
vectorstore = PGVector(
    connection_string="postgresql://admin:password@localhost:5432/myapp",
    embedding_function=embeddings,
    collection_name="documents"
)

# Similarity search
results = vectorstore.similarity_search("machine learning papers", k=5)

LlamaIndex

llamaindex_example.py
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.postgres import PGVectorStore

# Connect to SynergyDB via the PostgreSQL protocol
vector_store = PGVectorStore.from_params(
    host="localhost",
    port="5432",
    user="admin",
    password="password",
    database="myapp",
    table_name="documents",
    embed_dim=1536
)

# Load and index documents
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(
    documents, vector_store=vector_store
)

# Query
query_engine = index.as_query_engine()
response = query_engine.query("What are the key findings?")

Migration Guides

SynergyDB provides built-in tooling for zero-downtime migration from existing database instances. The migration engine supports live replication, allowing you to run both systems in parallel until you are confident in the cutover. Each migration follows the same six-step process tailored to the source database.

Migrate from PostgreSQL

Step 1: Pre-migration Checklist

1. Verify SynergyDB is running with the PostgreSQL protocol enabled
2. Ensure the source PostgreSQL server has logical replication enabled (wal_level = logical)
3. Confirm network connectivity between source and SynergyDB on port 5432
4. Verify sufficient disk space (at least 1.5x the source database size)
5. Create a dedicated replication user on the source with the REPLICATION privilege
6. Document all extensions in use -- verify compatibility on the SynergyDB extensions page

Step 2: Configure Source Connection

migration-pg.yaml
migration:
  source:
    type: postgresql
    host: source-pg.example.com
    port: 5432
    database: production
    user: replication_user
    password: {PGPASSWORD}
    ssl_mode: require
  target:
    database: production
  options:
    include_schemas:
      - public
      - app
    exclude_tables:
      - pg_stat_*
      - _prisma_migrations
    parallel_workers: 4
    batch_size: 10000

Step 3: Start Live Replication

terminal
# Start the migration (initial snapshot + continuous replication)
synergydb migrate start --config migration-pg.yaml

# Monitor progress
synergydb migrate status --migration-id pg-20260210-001

The migration engine performs an initial consistent snapshot of all tables, then switches to streaming logical replication to capture ongoing changes. The migrate status command shows current lag, rows transferred, and estimated time remaining.
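The time-remaining figure is simple to sanity-check by hand: remaining lag divided by the observed apply rate (a back-of-the-envelope sketch, not the tool's actual implementation):

```python
def eta_seconds(lag_bytes: int, apply_bytes_per_sec: float) -> float:
    """Rough time to drain replication lag at the observed apply rate."""
    return lag_bytes / apply_bytes_per_sec

# e.g. 64 MB of lag draining at 16 MB/s
print(eta_seconds(64 * 1024 * 1024, 16 * 1024 * 1024))  # 4.0
```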

Step 4: Validate Data Consistency

terminal
# Run row-count and checksum validation across all tables
synergydb migrate validate --migration-id pg-20260210-001

# Sample output:
# Table              Source Rows  Target Rows  Checksum Match
# public.users       1,284,531    1,284,531    OK
# public.orders      8,412,003    8,412,003    OK
# public.products    42,891       42,891       OK
# app.sessions       312,445      312,445      OK
# Validation PASSED: 4/4 tables consistent

Step 5: Redirect Application Traffic

Update your application connection strings to point to SynergyDB. Because the PostgreSQL wire protocol is fully compatible, no application code changes are required. We recommend a phased rollout:

terminal
# Point read traffic to SynergyDB first (canary)
# Update your load balancer or DNS to route 10% of reads

# Once validated, route all reads
# Finally, redirect writes and stop replication
synergydb migrate cutover --migration-id pg-20260210-001
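In application code, the canary split amounts to a weighted choice between the two DSNs (illustrative only; in practice the split usually lives in your load balancer or DNS, and the hostnames here are placeholders):

```python
import random

OLD_DSN = "postgresql://app:secret@source-pg.example.com:5432/production"
NEW_DSN = "postgresql://app:secret@synergydb.example.com:5432/production"

def pick_read_dsn(canary_fraction: float = 0.10) -> str:
    """Route roughly `canary_fraction` of read traffic to SynergyDB."""
    return NEW_DSN if random.random() < canary_fraction else OLD_DSN
```

Raise the fraction gradually as validation passes, then cut writes over in one step.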

Step 6: Decommission Source

After running on SynergyDB for a validation period (we recommend 7 days minimum), stop the replication slot on the source and decommission:

terminal
# Finalize the migration (cleans up the replication slot on the source)
synergydb migrate finalize --migration-id pg-20260210-001

# Verify no remaining lag
synergydb migrate status --migration-id pg-20260210-001
# Status: COMPLETED

Migrate from MySQL

Step 1: Pre-migration Checklist

1. Verify SynergyDB is running with the MySQL protocol enabled
2. Ensure the source MySQL server has binary logging enabled (log_bin = ON, binlog_format = ROW)
3. Confirm GTID mode is enabled for consistent replication (gtid_mode = ON)
4. Verify network connectivity between source and SynergyDB on port 3306
5. Create a replication user with the REPLICATION SLAVE and REPLICATION CLIENT privileges
6. Verify sufficient disk space (at least 1.5x the source database size)

Step 2: Configure Source Connection

migration-mysql.yaml
migration:
  source:
    type: mysql
    host: source-mysql.example.com
    port: 3306
    database: production
    user: replication_user
    password: {MYSQL_PASSWORD}
    ssl_mode: required
  target:
    database: production
  options:
    include_databases:
      - production
    exclude_tables:
      - _migrations
      - sessions
    parallel_workers: 4
    batch_size: 10000
    charset: utf8mb4

Step 3: Start Live Replication

terminalbash
synergydb migrate start --config migration-mysql.yaml

"color:#64748b"># The engine reads the binlog from the current GTID position
synergydb migrate status --migration-id mysql-20260210-001

Step 4: Validate Data Consistency

terminalbash
synergydb migrate validate --migration-id mysql-20260210-001

"color:#64748b"># Validates row counts, checksums, and schema compatibility
"color:#64748b"># Reports any data type mapping differences

Step 5: Redirect Application Traffic

Update application connection strings. MySQL-compatible applications connect to SynergyDB on port 3306 without code changes. The same phased rollout approach applies: route reads first, then writes.

terminalbash
synergydb migrate cutover --migration-id mysql-20260210-001

Step 6: Decommission Source

terminalbash
synergydb migrate finalize --migration-id mysql-20260210-001

Migrate from MongoDB

Step 1: Pre-migration Checklist

1. Verify SynergyDB is running with MongoDB protocol enabled
2. Ensure the source MongoDB deployment is a replica set (change streams require the oplog)
3. Confirm network connectivity between source and SynergyDB on port 27017
4. Verify sufficient disk space (at least 2x source database size due to format conversion)
5. Create a dedicated migration user with readAnyDatabase and clusterMonitor roles
6. Inventory all indexes -- SynergyDB recreates them automatically during migration

Step 2: Configure Source Connection

migration-mongo.yamlyaml
migration:
  source:
    type: mongodb
    uri: mongodb://repl_user:password@mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0&authSource=admin
  target:
    database: production
  options:
    include_databases:
      - production
      - analytics
    exclude_collections:
      - system.profile
      - tmp_*
    parallel_workers: 8
    batch_size: 5000
    preserve_ids: true  # Keep original _id values

Step 3: Start Live Replication

terminalbash
synergydb migrate start --config migration-mongo.yaml

"color:#64748b"># Uses change streams for continuous replication
synergydb migrate status --migration-id mongo-20260210-001

Step 4: Validate Data Consistency

terminalbash
synergydb migrate validate --migration-id mongo-20260210-001

"color:#64748b"># Validates document counts, samples documents for field-level comparison,
"color:#64748b"># and verifies all indexes have been recreated

Step 5: Redirect Application Traffic

Update your MongoDB connection string to point to SynergyDB. MongoDB client libraries -- ODMs like Mongoose and Mongoid as well as drivers like Motor -- connect without changes. The migration engine continues capturing changes from the source until cutover.

terminalbash
synergydb migrate cutover --migration-id mongo-20260210-001

Step 6: Decommission Source

terminalbash
synergydb migrate finalize --migration-id mongo-20260210-001

"color:#64748b"># Closes the change stream cursor and marks migration complete
After cutover, the source database should be kept running in read-only mode for at least 7 days as a rollback safety net. Only decommission after validating all application functionality against SynergyDB.

Migrate from SQLite

SynergyDB supports direct import from SQLite database files. Because SQLite does not support live replication, the migration is a one-time import operation that copies all tables, indexes, and data into SynergyDB.

Step 1: Configure Source

migration-sqlite.yamlyaml
migration:
  source:
    type: sqlite
    path: /path/to/database.sqlite
  target:
    database: production
  options:
    include_tables:
      - users
      - orders
      - products
    parallel_workers: 4
    batch_size: 10000

Step 2: Run Import

terminalbash
"color:#64748b"># Import all tables from SQLite
synergydb migrate start --config migration-sqlite.yaml

"color:#64748b"># Monitor progress
synergydb migrate status --migration-id sqlite-20260210-001

Step 3: Validate

terminalbash
synergydb migrate validate --migration-id sqlite-20260210-001

"color:#64748b"># Validates row counts and schema mapping
"color:#64748b"># SQLite types are mapped to SynergyDB types automatically:
"color:#64748b"># INTEGER -> INTEGER, TEXT -> TEXT, REAL -> FLOAT, BLOB -> BYTES
SQLite migration is a snapshot import -- there is no continuous replication. The import preserves ROWID values and recreates all indexes. Foreign key constraints are translated to SynergyDB equivalents.
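The four documented type mappings are mechanical enough to sketch as a helper. Note that only the four storage-class rows above come from this guide; the handling of other declared column types (VARCHAR, NUMERIC, etc.) is an assumption loosely modeled on SQLite's affinity rules, not documented SynergyDB behavior:

```python
# Documented storage-class mappings (from the validation output above).
SQLITE_TO_SYNERGY = {
    "INTEGER": "INTEGER",
    "TEXT": "TEXT",
    "REAL": "FLOAT",
    "BLOB": "BYTES",
}

def map_type(declared: str) -> str:
    """Map a declared SQLite column type to a SynergyDB type.

    Fallbacks below are assumptions based on SQLite affinity, not
    documented behavior.
    """
    decl = declared.upper()
    if decl in SQLITE_TO_SYNERGY:
        return SQLITE_TO_SYNERGY[decl]
    if "INT" in decl:                                  # BIGINT, SMALLINT, ...
        return "INTEGER"
    if "CHAR" in decl or "CLOB" in decl or "TEXT" in decl:
        return "TEXT"                                  # VARCHAR(255), ...
    if "REAL" in decl or "FLOA" in decl or "DOUB" in decl:
        return "FLOAT"
    return "TEXT"                                      # untyped / NUMERIC
```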

Configuration Reference

Complete reference of all configuration options available in config.yaml. Options marked with an asterisk (*) require a server restart to take effect; all others can be applied with synergydb config reload.

Cluster Settings

Option | Type | Default | Description
cluster.node_id | string | auto | Unique identifier for this node. Auto-generated from hostname if not set.
cluster.advertise_addr | string | 0.0.0.0:9400 | Address and port this node advertises to peers for cluster communication.
cluster.peers | string[] | [] | List of peer addresses for cluster formation. Not required for single-node.
cluster.join_timeout | duration | 30s | Maximum time to wait for cluster join before starting in standalone mode.

Protocol Settings

Option | Type | Default | Description
protocols.postgresql.enabled | bool | true | Enable the PostgreSQL wire protocol listener.
protocols.postgresql.port * | int | 5432 | TCP port for PostgreSQL connections.
protocols.postgresql.max_connections | int | 500 | Maximum concurrent PostgreSQL connections.
protocols.postgresql.ssl | bool | true | Require TLS for PostgreSQL connections.
protocols.mysql.enabled | bool | true | Enable the MySQL wire protocol listener.
protocols.mysql.port * | int | 3306 | TCP port for MySQL connections.
protocols.mysql.max_connections | int | 500 | Maximum concurrent MySQL connections.
protocols.mongodb.enabled | bool | true | Enable the MongoDB wire protocol listener.
protocols.mongodb.port * | int | 27017 | TCP port for MongoDB connections.
protocols.mongodb.max_connections | int | 1000 | Maximum concurrent MongoDB connections.
protocols.redis.enabled | bool | true | Enable the Redis protocol listener.
protocols.redis.port * | int | 6379 | TCP port for Redis connections.
protocols.redis.max_connections | int | 10000 | Maximum concurrent Redis connections.
protocols.redis.requirepass | string | "" | Password for the Redis AUTH command. Empty disables AUTH.
protocols.bolt.enabled | bool | true | Enable the Neo4j Bolt protocol listener.
protocols.bolt.port * | int | 7687 | TCP port for Neo4j Bolt connections.
protocols.bolt.max_connections | int | 500 | Maximum concurrent Bolt connections.
protocols.http.enabled | bool | true | Enable the HTTP/REST API listener.
protocols.http.port * | int | 8080 | TCP port for HTTP/REST and GraphQL.
protocols.grpc.enabled | bool | true | Enable the gRPC API listener.
protocols.grpc.port * | int | 9090 | TCP port for gRPC connections.

Vector Settings

Option | Type | Default | Description
vector.default_index | string | hnsw | Default vector index type for VECTOR columns.
vector.hot_threshold_secs | int | 3600 | Seconds before vectors move from the hot (HNSW) tier to the warm tier.
vector.warm_threshold_secs | int | 604800 | Seconds before vectors move from the warm tier to the cold (IVF-PQ) tier.
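With these defaults, a vector's tier is a pure function of how long ago it was last accessed: under 1 hour it stays hot (HNSW), under 7 days it is warm, and beyond that it goes cold (IVF-PQ). A sketch of that classification (the helper is illustrative, not part of the product):

```python
HOT_THRESHOLD_SECS = 3600     # vector.hot_threshold_secs default (1 hour)
WARM_THRESHOLD_SECS = 604800  # vector.warm_threshold_secs default (7 days)

def vector_tier(secs_since_access: int) -> str:
    """Classify a vector into hot/warm/cold from its last-access age."""
    if secs_since_access < HOT_THRESHOLD_SECS:
        return "hot"    # HNSW index, lowest query latency
    if secs_since_access < WARM_THRESHOLD_SECS:
        return "warm"
    return "cold"       # IVF-PQ, compressed, cheapest to store
```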

Storage Settings

Option | Type | Default | Description
storage.engine * | string | synergy | Storage engine backend: synergy, rocksdb, or hybrid.
storage.data_dir * | string | /var/lib/synergydb/data | Directory for data files. Must exist and be writable.
storage.wal_dir * | string | /var/lib/synergydb/wal | Directory for the write-ahead log. Use a separate disk for best performance.
storage.max_wal_size | string | 4GB | Maximum WAL size before triggering a checkpoint.
storage.compaction_style | string | leveled | Compaction strategy: leveled, universal, or fifo.
storage.bloom_filter_bits | int | 10 | Bits per key for bloom filters. Higher values reduce false positives.
storage.block_cache_size | string | 8GB | Size of the in-memory block cache for data reads.
storage.write_buffer_size | string | 256MB | Size of each memtable write buffer.
storage.max_write_buffer_number | int | 4 | Maximum number of memtable write buffers before stalling writes.

Replication Settings

Option | Type | Default | Description
replication.mode | string | synchronous | Replication mode: synchronous, asynchronous, or quorum.
replication.quorum_size | int | 2 | Number of nodes that must acknowledge a write in quorum mode.
replication.wal_shipping | bool | true | Ship WAL segments to replicas for point-in-time recovery.
replication.snapshot_interval | duration | 6h | Interval between automated base backups / snapshots.
replication.max_lag_bytes | int | 67108864 | Maximum replication lag in bytes before alerting. Default is 64 MB.

Security Settings

Option | Type | Default | Description
security.authentication | string | scram-sha-256 | Authentication mechanism: scram-sha-256, ldap, or oidc.
security.encryption_at_rest | bool | true | Enable transparent data encryption at rest.
security.encryption_algorithm | string | AES-256-GCM | Encryption algorithm for data at rest.
security.master_key_provider | string | vault | Key management system: vault, aws-kms, gcp-kms, or local.
security.audit_log.enabled | bool | true | Enable audit logging of all authentication and query events.
security.audit_log.include_queries | bool | true | Include full SQL / command text in audit log entries.
security.tls.min_version | string | 1.3 | Minimum TLS version. Must be 1.2 or 1.3.

Telemetry Settings

Option | Type | Default | Description
telemetry.metrics.enabled | bool | true | Expose a Prometheus-compatible metrics endpoint.
telemetry.metrics.port | int | 9090 | Port for the metrics HTTP endpoint.
telemetry.tracing.enabled | bool | true | Enable distributed tracing via OpenTelemetry.
telemetry.tracing.exporter | string | otlp | Trace exporter: otlp, jaeger, or zipkin.
telemetry.logging.level | string | info | Log level: debug, info, warn, or error.
telemetry.logging.format | string | json | Log format: json or text.

API Reference

SynergyDB exposes a REST management API on port 9090 (configurable via telemetry.metrics.port). All endpoints require a valid API key passed in the Authorization header using the Bearer scheme. Responses use JSON.
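For example, an authenticated request can be built with Python's standard library alone. The hostname is a placeholder and the key comes from the environment; this sketch only shows the Bearer header convention the API expects:

```python
import os
import urllib.request

API_KEY = os.environ.get("SYNERGYDB_API_KEY", "dev-key")  # placeholder
BASE_URL = "https://synergydb.example.com:9090"           # placeholder host

def status_request() -> urllib.request.Request:
    """Build an authenticated request for GET /api/v1/status."""
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/status",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )

req = status_request()
# resp = urllib.request.urlopen(req)   # requires a live server
# body = json.loads(resp.read())       # responses are JSON
```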

GET/api/v1/status

Returns the current health status of the SynergyDB instance, including uptime, version, cluster membership, and resource utilization.

requestbash
curl -H "Authorization: Bearer $API_KEY" \
  https:"color:#64748b">//synergydb.example.com:9090/api/v1/status
responsejson
{
  "status": "healthy",
  "version": "0.1.0",
  "uptime_seconds": 864231,
  "node_id": "node-1",
  "cluster": {
    "state": "leader",
    "peers": 2,
    "healthy_peers": 2
  },
  "protocols": {
    "postgresql": { "state": "listening", "connections": 142 },
    "mysql": { "state": "listening", "connections": 89 },
    "mongodb": { "state": "listening", "connections": 203 },
    "redis": { "state": "listening", "connections": 1847 }
  },
  "storage": {
    "data_size_bytes": 107374182400,
    "wal_size_bytes": 2147483648,
    "disk_usage_percent": 34.2
  }
}
GET/api/v1/protocols

Returns detailed information about each protocol listener, including configuration, connection statistics, and throughput metrics.

requestbash
curl -H "Authorization: Bearer $API_KEY" \
  https:"color:#64748b">//synergydb.example.com:9090/api/v1/protocols
responsejson
{
  "protocols": [
    {
      "name": "postgresql",
      "version": "15.0",
      "port": 5432,
      "ssl": true,
      "max_connections": 500,
      "active_connections": 142,
      "queries_per_second": 3847.2,
      "avg_latency_ms": 1.23
    },
    {
      "name": "mysql",
      "version": "8.0",
      "port": 3306,
      "ssl": true,
      "max_connections": 500,
      "active_connections": 89,
      "queries_per_second": 2104.8,
      "avg_latency_ms": 1.45
    },
    {
      "name": "mongodb",
      "version": "6.0",
      "port": 27017,
      "ssl": true,
      "max_connections": 1000,
      "active_connections": 203,
      "operations_per_second": 8921.5,
      "avg_latency_ms": 0.87
    },
    {
      "name": "redis",
      "version": "7.0",
      "port": 6379,
      "ssl": false,
      "max_connections": 10000,
      "active_connections": 1847,
      "commands_per_second": 124503.1,
      "avg_latency_ms": 0.12
    }
  ]
}
POST/api/v1/migrations

Starts a new migration job. Accepts a migration configuration as the request body and returns a migration ID for tracking progress.

requestbash
curl -X POST \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "source": {
      "type": "postgresql",
      "host": "source-pg.example.com",
      "port": 5432,
      "database": "production",
      "user": "replication_user",
      "password": "secret"
    },
    "options": {
      "parallel_workers": 4,
      "batch_size": 10000,
      "include_schemas": ["public"]
    }
  }' \
  https:"color:#64748b">//synergydb.example.com:9090/api/v1/migrations
responsejson
{
  "migration_id": "pg-20260210-001",
  "status": "initializing",
  "source_type": "postgresql",
  "created_at": "2026-02-10T14:30:00Z",
  "estimated_rows": 12847320,
  "tables_discovered": 24
}
GET/api/v1/metrics

Returns Prometheus-compatible metrics in text exposition format. This endpoint is typically scraped by Prometheus or a compatible agent. Metrics include per-protocol connection counts, query latency histograms, storage utilization, replication lag, and cache hit rates.

requestbash
curl -H "Authorization: Bearer $API_KEY" \
  https:"color:#64748b">//synergydb.example.com:9090/api/v1/metrics
response (prometheus text format)bash
"color:#64748b"># HELP synergydb_connections_active Current active connections per protocol
"color:#64748b"># TYPE synergydb_connections_active gauge
synergydb_connections_active{protocol="postgresql"} 142
synergydb_connections_active{protocol="mysql"} 89
synergydb_connections_active{protocol="mongodb"} 203
synergydb_connections_active{protocol="redis"} 1847

"color:#64748b"># HELP synergydb_query_duration_seconds Query latency histogram
"color:#64748b"># TYPE synergydb_query_duration_seconds histogram
synergydb_query_duration_seconds_bucket{protocol="postgresql",le="0.001"} 48291
synergydb_query_duration_seconds_bucket{protocol="postgresql",le="0.01"} 89412
synergydb_query_duration_seconds_bucket{protocol="postgresql",le="0.1"} 91003
synergydb_query_duration_seconds_bucket{protocol="postgresql",le="1"} 91204
synergydb_query_duration_seconds_bucket{protocol="postgresql",le="+Inf"} 91210

"color:#64748b"># HELP synergydb_storage_bytes Total storage size in bytes
"color:#64748b"># TYPE synergydb_storage_bytes gauge
synergydb_storage_bytes{type="data"} 107374182400
synergydb_storage_bytes{type="wal"} 2147483648

"color:#64748b"># HELP synergydb_replication_lag_bytes Replication lag in bytes
"color:#64748b"># TYPE synergydb_replication_lag_bytes gauge
synergydb_replication_lag_bytes{peer="node-2"} 4096
synergydb_replication_lag_bytes{peer="node-3"} 8192

"color:#64748b"># HELP synergydb_cache_hit_ratio Block cache hit ratio
"color:#64748b"># TYPE synergydb_cache_hit_ratio gauge
synergydb_cache_hit_ratio 0.9847
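The exposition format is plain text and easy to post-process outside of Prometheus. A small sketch that parses gauge samples like the ones above and sums active connections across protocols (the parser handles only the simple line forms shown here, not the full spec):

```python
import re

SAMPLE = """\
synergydb_connections_active{protocol="postgresql"} 142
synergydb_connections_active{protocol="mysql"} 89
synergydb_connections_active{protocol="mongodb"} 203
synergydb_connections_active{protocol="redis"} 1847
synergydb_cache_hit_ratio 0.9847
"""

LINE = re.compile(r'^(\w+)(?:\{([^}]*)\})?\s+([0-9.eE+-]+)$')

def parse_samples(text):
    """Yield (metric, labels, value) from Prometheus text-format lines,
    skipping blanks and # HELP / # TYPE comments."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = LINE.match(line)
        if not m:
            continue
        name, raw_labels, value = m.groups()
        labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels or ""))
        yield name, labels, float(value)

active = sum(v for n, _, v in parse_samples(SAMPLE)
             if n == "synergydb_connections_active")
```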

Security & Compliance

SynergyDB is designed from the ground up for regulated environments. Every layer of the stack -- from network transport to data at rest -- implements defense-in-depth security controls. This section covers encryption, access control, audit logging, and compliance certifications.

Encryption at Rest (AES-256)

All data files, WAL segments, and snapshots are encrypted using AES-256-GCM authenticated encryption. The data encryption key (DEK) is itself encrypted by a master key managed through your preferred KMS provider (HashiCorp Vault, AWS KMS, or GCP Cloud KMS). Key rotation is performed online without downtime using envelope encryption.
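The envelope pattern is what makes online rotation cheap: data files stay encrypted under the DEK, and only the small wrapped DEK is re-encrypted under the new master key. A toy sketch of the key hierarchy -- the XOR-based wrap below stands in for the KMS's real AES key-wrap and is illustration only, not cryptography to reuse:

```python
import hashlib
import os

def wrap(master_key: bytes, dek: bytes) -> bytes:
    """Toy key wrap: XOR the DEK with a keystream derived from the
    master key. A real deployment uses the KMS's AES key-wrap."""
    stream = hashlib.shake_256(master_key).digest(len(dek))
    return bytes(a ^ b for a, b in zip(dek, stream))

unwrap = wrap  # XOR wrapping is its own inverse

# Key hierarchy: the KMS holds the master key; the database stores
# only the wrapped DEK alongside the data.
master_v1 = os.urandom(32)
dek = os.urandom(32)              # the key that encrypts data files
stored = wrap(master_v1, dek)

# Rotation: re-wrap the DEK under a new master key. The DEK itself --
# and therefore every encrypted data file -- is untouched.
master_v2 = os.urandom(32)
stored = wrap(master_v2, unwrap(master_v1, stored))

assert unwrap(master_v2, stored) == dek
```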

config.yaml (encryption)yaml
security:
  encryption_at_rest: true
  encryption_algorithm: AES-256-GCM
  master_key_provider: vault
  vault_addr: https://vault.internal:8200
  vault_path: secret/synergydb/master-key
  key_rotation_interval: 90d  # Rotate DEK every 90 days

Encryption in Transit (TLS 1.3)

All client-facing protocol listeners and internal cluster communication channels enforce TLS 1.3 by default. SynergyDB supports automatic certificate renewal via ACME (Let's Encrypt) or manual certificate provisioning.

config.yaml (TLS)yaml
security:
  tls:
    min_version: '1.3'
    cipher_suites:
      - TLS_AES_256_GCM_SHA384
      - TLS_CHACHA20_POLY1305_SHA256
    cert_file: /etc/synergydb/certs/server.crt
    key_file: /etc/synergydb/certs/server.key
    ca_file: /etc/synergydb/certs/ca.crt
    client_auth: require  # mutual TLS

Row-Level Security

SynergyDB supports row-level security (RLS) policies that restrict which rows a given user or role can access. RLS policies are defined using SQL and are enforced transparently across all protocols -- a policy defined via PostgreSQL is also enforced for queries arriving through MySQL, MongoDB, and Redis.

RLS examplesql
-- Enable row-level security on the orders table
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

-- Users can only see their own orders
CREATE POLICY user_isolation ON orders
  USING (user_id = current_user_id());

-- Admins can see all orders
CREATE POLICY admin_access ON orders
  TO admin_role
  USING (true);

-- The policy applies to all protocols:
-- PostgreSQL: SELECT * FROM orders (filtered by policy)
-- MySQL: SELECT * FROM orders (filtered by policy)
-- MongoDB: db.orders.find() (filtered by policy)
-- Redis: HGETALL orders:* (filtered by policy)

Audit Logging

Every authentication event, DDL statement, and data access operation is logged to a tamper-evident audit log. The audit log captures the user identity, source IP, protocol used, full query text, affected rows, and timestamp. Logs can be shipped to SIEM systems via syslog, Kafka, or S3.

audit log entryjson
{
  "timestamp": "2026-02-10T14:32:18.445Z",
  "event_type": "query",
  "user": "app_service",
  "source_ip": "10.0.2.15",
  "protocol": "postgresql",
  "database": "production",
  "query": "SELECT * FROM users WHERE id = $1",
  "params": ["42"],
  "rows_affected": 1,
  "duration_ms": 0.82,
  "status": "success"
}

Role-Based Access Control (RBAC)

SynergyDB implements a unified RBAC system that spans all protocols. Roles and permissions defined through any protocol are enforced across all others. The permission model supports database-level, schema-level, table-level, and column-level grants.

RBAC configurationsql
-- Create roles
CREATE ROLE readonly;
CREATE ROLE readwrite;
CREATE ROLE admin_role;

-- Grant permissions
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO readwrite;
GRANT ALL PRIVILEGES ON DATABASE production TO admin_role;

-- Assign users to roles
GRANT readonly TO analyst_user;
GRANT readwrite TO app_service;
GRANT admin_role TO dba_user;

-- Column-level security
GRANT SELECT (id, name, email) ON users TO readonly;
-- readonly cannot access: users.ssn, users.salary

HIPAA Compliance Checklist

- Encryption at rest enabled (AES-256-GCM) -- security.encryption_at_rest: true
- TLS 1.3 enforced for all connections -- security.tls.min_version: '1.3'
- Audit logging enabled with query text -- security.audit_log.enabled: true
- Access controls configured (RBAC) -- CREATE ROLE / GRANT statements
- Automatic session timeout (15 min idle) -- protocols.*.idle_timeout: 15m
- Backup encryption enabled -- replication.snapshot_encryption: true
- Network segmentation (private VPC) -- infrastructure configuration
- BAA signed with Synergy Technologies -- contact sales@synergydb.io

SOC 2 Type II

SynergyDB Cloud (the managed offering) holds SOC 2 Type II certification covering the Security, Availability, and Confidentiality trust service criteria. The audit report is available upon request under NDA. Self-hosted deployments can leverage SynergyDB's built-in security controls to satisfy SOC 2 requirements within your own infrastructure audit scope. Key controls include immutable audit logs, automated access reviews, encrypted backups with tested restore procedures, and continuous monitoring via the metrics API.

Performance Tuning

SynergyDB ships with sensible defaults, but production workloads benefit from tuning based on your hardware profile and access patterns. This section covers the four most impactful tuning areas.

Memory Allocation

SynergyDB uses memory for the block cache (which reduces read amplification), write buffers (which absorb write bursts), and per-connection state. A general rule of thumb is to allocate 60-70% of available RAM to the block cache and 10-15% to write buffers, leaving the rest for the OS page cache and connection overhead.

config.yaml (memory tuning)yaml
"color:#64748b"># For a 64 GB server
storage:
  block_cache_size: 40GB            "color:#64748b"># 62.5% of RAM -- primary read cache
  write_buffer_size: 512MB          "color:#64748b"># Size of each memtable
  max_write_buffer_number: 6        "color:#64748b"># Up to 3 GB total write buffers
  max_open_files: 10000             "color:#64748b"># OS file descriptor limit

"color:#64748b"># Per-protocol connection memory
protocols:
  postgresql:
    work_mem: 64MB                  "color:#64748b"># Memory for sorts and hash joins per query
    maintenance_work_mem: 512MB     "color:#64748b"># Memory for VACUUM, CREATE INDEX
  mysql:
    sort_buffer_size: 64MB
    join_buffer_size: 64MB
  mongodb:
    wired_tiger_cache_size: 8GB     "color:#64748b"># Additional cache for document workloads
  redis:
    maxmemory: 4GB                  "color:#64748b"># Limit for Redis key-value data
    maxmemory_policy: allkeys-lru   "color:#64748b"># Eviction policy when limit is reached
Run synergydb benchmark --duration 60s to profile your workload and get personalized tuning recommendations. The benchmark tool analyzes read/write ratios, key sizes, and query complexity to suggest optimal settings.
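The percentages in the rule of thumb translate directly into a quick sizing helper. This sketch simply applies the stated splits (65% cache, 12% write buffers); it is not a SynergyDB tool, and the example config above is deliberately more conservative on write buffers than the raw percentages suggest:

```python
def size_memory(total_ram_gb: float) -> dict:
    """Split RAM per the tuning rule of thumb: ~65% block cache,
    ~12% write buffers, remainder for OS page cache and
    per-connection overhead."""
    block_cache = round(total_ram_gb * 0.65, 1)
    write_buffers = round(total_ram_gb * 0.12, 1)
    headroom = round(total_ram_gb - block_cache - write_buffers, 1)
    return {
        "block_cache_gb": block_cache,
        "write_buffers_gb": write_buffers,
        "os_and_connections_gb": headroom,
    }

plan = size_memory(64)  # a 64 GB server, as in the config above
```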

Connection Pooling

SynergyDB includes a built-in connection pooler for the PostgreSQL and MySQL protocols, eliminating the need for external tools like PgBouncer or ProxySQL. The pooler supports transaction-mode and session-mode pooling.

config.yaml (connection pooling)yaml
protocols:
  postgresql:
    pooling:
      enabled: true
      mode: transaction          # transaction | session
      pool_size: 100             # Backend connections to storage engine
      max_client_connections: 5000  # Client-facing connection limit
      idle_timeout: 300s         # Close idle backend connections after 5 min
      connection_lifetime: 3600s # Recycle backend connections after 1 hour
  mysql:
    pooling:
      enabled: true
      mode: transaction
      pool_size: 100
      max_client_connections: 5000
      idle_timeout: 300s

Transaction-mode pooling multiplexes client connections onto a smaller pool of backend connections. A client holds a backend connection only for the duration of a transaction, allowing thousands of clients to share a much smaller pool. This is the recommended mode for stateless web applications.
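The multiplexing effect is easy to see in miniature. This toy pool (not the real pooler) hands out a backend slot only for the span of one transaction, so peak backend usage tracks concurrent transactions rather than connected clients:

```python
class TransactionPool:
    """Toy transaction-mode pooler: a backend slot is held only for
    the duration of one transaction, then returned for reuse."""

    def __init__(self, pool_size: int):
        self.pool_size = pool_size
        self.free = list(range(pool_size))
        self.peak_in_use = 0

    def run_transaction(self, work):
        if not self.free:
            raise RuntimeError("pool exhausted; transaction must queue")
        backend = self.free.pop()
        self.peak_in_use = max(self.peak_in_use,
                               self.pool_size - len(self.free))
        try:
            return work(backend)
        finally:
            self.free.append(backend)  # released at commit/rollback

pool = TransactionPool(pool_size=100)
# 5000 "clients" each run a short, non-overlapping transaction; they
# all end up sharing a single backend connection.
for client in range(5000):
    pool.run_transaction(lambda backend: backend)
```

With real concurrency, peak usage rises with the number of in-flight transactions, which is why `pool_size` is sized for transaction concurrency, not client count.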

Index Optimization

SynergyDB unifies indexing across all protocols. An index created through PostgreSQL accelerates queries arriving from any protocol. Use the built-in index advisor to identify missing indexes and redundant ones.

index managementsql
-- View the index advisor recommendations
SELECT * FROM synergydb_index_advisor();

-- Sample output:
-- recommendation | table     | columns         | estimated_speedup
-- CREATE INDEX   | orders    | (user_id, date) | 47x
-- CREATE INDEX   | products  | (category_id)   | 12x
-- DROP INDEX     | users     | idx_users_tmp   | saves 2.1 GB

-- Create a concurrent index (no table lock)
CREATE INDEX CONCURRENTLY idx_orders_user_date
  ON orders (user_id, created_at DESC);

-- Partial index for hot data
CREATE INDEX idx_orders_active
  ON orders (status, priority)
  WHERE status IN ('pending', 'processing');

-- Covering index to avoid table lookups
CREATE INDEX idx_users_email_name
  ON users (email) INCLUDE (name, created_at);

-- Monitor index usage
SELECT index_name, idx_scan, idx_tup_read, idx_tup_fetch, size_bytes
FROM synergydb_index_stats
ORDER BY idx_scan DESC;

Query Cache Configuration

SynergyDB includes an intelligent query result cache that automatically invalidates entries when underlying data changes. The cache operates at the unified storage layer, so a cached result from a PostgreSQL query also benefits identical queries arriving via MySQL.

config.yaml (query cache)yaml
storage:
  query_cache:
    enabled: true
    max_size: 2GB                   # Maximum memory for cached results
    max_entry_size: 16MB            # Skip caching results larger than this
    ttl: 300s                       # Default TTL for cache entries
    invalidation: immediate         # immediate | lazy
    exclude_patterns:
      - "SELECT.*NOW()"             # Never cache time-dependent queries
      - "SELECT.*RANDOM()"          # Never cache non-deterministic queries

    # Per-protocol cache hints
    postgresql:
      enable_prepared_cache: true   # Cache prepared statement plans
      plan_cache_size: 500          # Max cached plans per connection
    redis:
      passthrough: true             # Redis GET/SET bypass the query cache
                                    # (Redis protocol has its own semantics)

You can control caching per-query using SQL hints:

cache hintssql
-- Force cache bypass for a specific query
SELECT /*+ NO_CACHE */ * FROM orders WHERE status = 'pending';

-- Set a custom TTL for a specific query
SELECT /*+ CACHE_TTL(60) */ count(*) FROM users;

-- View cache statistics
SELECT hit_count, miss_count, eviction_count,
       hit_count::float / (hit_count + miss_count) AS hit_ratio,
       memory_used_bytes, entry_count
FROM synergydb_query_cache_stats;
For write-heavy workloads (more than 50% writes), consider disabling the query cache or reducing max_size. The invalidation overhead can outweigh the read benefits when data changes frequently. Monitor the hit ratio via synergydb_query_cache_stats to make data-driven decisions.
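Immediate invalidation is the behavior worth internalizing: any write to a table evicts every cached result that read from that table. A minimal sketch of that bookkeeping (illustrative only, not the actual cache implementation):

```python
class QueryCache:
    """Toy result cache with immediate, table-granular invalidation."""

    def __init__(self):
        self.results = {}   # query text -> cached result
        self.by_table = {}  # table name -> set of queries that read it

    def put(self, query, tables, result):
        self.results[query] = result
        for t in tables:
            self.by_table.setdefault(t, set()).add(query)

    def get(self, query):
        return self.results.get(query)

    def invalidate(self, table):
        """Called on every write to `table` (invalidation: immediate)."""
        for query in self.by_table.pop(table, set()):
            self.results.pop(query, None)

cache = QueryCache()
cache.put("SELECT count(*) FROM users", ["users"], 42)
cache.put("SELECT * FROM orders WHERE id = 1", ["orders"], {"id": 1})

cache.invalidate("users")  # an INSERT into users arrives
```

This is also why heavy write rates erode the cache's value: each write pays the eviction cost and discards results that readers would otherwise have reused.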

Last updated: February 2026 · SynergyDB v0.1.0
