MeshInfo uses PostgreSQL as its storage backend. This document covers configuration, schema, and operational guidance.
Add the following to your `config.toml`:

```toml
[storage]
read_from = "postgres"
write_to = ["postgres"]

[storage.postgres]
enabled = true
host = "postgres"
port = 5432
database = "meshinfo"
username = "postgres"
password = "your_password"
min_pool_size = 1
max_pool_size = 5
```

- `read_from`: Must be `"postgres"`
- `write_to`: Must be `["postgres"]`
- `postgres.enabled`: Must be `true`
- `postgres.host`: Database server hostname (use `"localhost"` if not using Docker)
- `postgres.port`: Database server port (default: 5432)
- `postgres.database`: Database name
- `postgres.username`: Database user
- `postgres.password`: Database password
- `postgres.min_pool_size`: Minimum connection pool size (default: 1)
- `postgres.max_pool_size`: Maximum connection pool size (default: 5)
The schema is automatically created on first run. Key tables:
- `nodes`: Core node information (ID, name, hardware, status)
- `node_positions`: Historical position data
- `node_neighborinfo`: Neighbor relationships (JSONB)
- `node_telemetry_current`: Most recent telemetry per node
- `telemetry`: Complete telemetry history (JSONB payloads)
- `chat_channels`: Chat channel metadata
- `chat_messages`: All chat messages
- `traceroutes`: Complete traceroute history (JSONB payloads)
Write path:

- MQTT message received
- Data stored in memory (`MemoryDataStore`) for internal use
- Real-time write to PostgreSQL (non-blocking; errors are logged)
Read path:

- Application starts
- PostgreSQL connection established
- API queries PostgreSQL directly for each request
- Lower memory footprint; data is always current
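The read path above can be sketched in a few lines: each API request runs its own query against PostgreSQL rather than consulting an in-memory cache. This is an illustration only; the `list_active_nodes` function, the column names, and the `FakePool` stub are assumptions, not MeshInfo's actual API. With asyncpg, `pool` would be a real connection pool providing the same `fetch` coroutine.

```python
import asyncio

async def list_active_nodes(pool, limit: int = 100):
    """Fetch the most recently seen nodes directly from the database."""
    rows = await pool.fetch(
        "SELECT id, name FROM nodes ORDER BY last_seen DESC LIMIT $1", limit
    )
    return [dict(r) for r in rows]

class FakePool:
    """Stand-in for an asyncpg pool so this sketch runs without a server."""
    async def fetch(self, query, *args):
        return [{"id": "abc123", "name": "Node A"}]

print(asyncio.run(list_active_nodes(FakePool())))
# → [{'id': 'abc123', 'name': 'Node A'}]
```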
- PostgreSQL write failures are logged but never block application execution
- Connection pool handles transient network issues
- Failed writes are logged for manual investigation
All writes happen in real-time as data arrives from MQTT, ensuring minimal data loss on crash.
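The non-blocking write behaviour can be sketched with plain asyncio: the write is scheduled as a background task, and a done-callback logs any failure instead of raising it into the MQTT handler. The names here (`write_node`, `schedule_write`, the `failures` list) are illustrative assumptions, not MeshInfo's internals; the stand-in write deliberately fails to exercise the error path.

```python
import asyncio
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("storage")
failures: list[str] = []  # collected here so the sketch is easy to inspect

async def write_node(node_id: str) -> None:
    """Stand-in for the real Postgres write; fails to exercise the error path."""
    raise ConnectionError("connection timeout")

def schedule_write(node_id: str) -> asyncio.Task:
    """Fire-and-forget: schedule the write and log failures instead of raising."""
    task = asyncio.create_task(write_node(node_id))

    def _on_done(t: asyncio.Task) -> None:
        if t.exception() is not None:
            msg = f"Failed to write node {node_id} to Postgres: {t.exception()}"
            failures.append(msg)
            log.error(msg)

    task.add_done_callback(_on_done)
    return task

async def main() -> None:
    schedule_write("abc123")   # returns immediately; message handling continues
    await asyncio.sleep(0.01)  # give the background task time to finish

asyncio.run(main())
print(failures)
```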
asyncpg connection pooling handles concurrent writes efficiently. Tune `min_pool_size` and `max_pool_size` based on your load.
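As a minimal sketch of how those settings might map onto asyncpg: `asyncpg.create_pool` is the real API (with `min_size`/`max_size` keyword arguments), but the `build_dsn` helper and the config dict shape are assumptions for illustration. Creating the pool needs a live server, so only the DSN construction runs here.

```python
def build_dsn(cfg: dict) -> str:
    """Assemble a PostgreSQL DSN from the [storage.postgres] settings."""
    return (
        f"postgresql://{cfg['username']}:{cfg['password']}"
        f"@{cfg['host']}:{cfg['port']}/{cfg['database']}"
    )

async def make_pool(cfg: dict):
    """Create an asyncpg pool sized from the config (requires a live server)."""
    import asyncpg  # imported lazily; only needed when actually connecting
    return await asyncpg.create_pool(
        dsn=build_dsn(cfg),
        min_size=cfg.get("min_pool_size", 1),
        max_size=cfg.get("max_pool_size", 5),
    )

cfg = {"username": "postgres", "password": "your_password",
       "host": "postgres", "port": 5432, "database": "meshinfo"}
print(build_dsn(cfg))
# → postgresql://postgres:your_password@postgres:5432/meshinfo
```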
Key indexes are maintained on node IDs, timestamps, and foreign key relationships for fast queries even with large datasets.
PostgreSQL operations are logged at INFO and ERROR levels:
```
INFO: PostgreSQL mode: Data will be queried directly from database
ERROR: Failed to write node abc123 to Postgres: connection timeout
```
Monitor these logs to ensure healthy operation.
If PostgreSQL connection fails on startup, MeshInfo will log the error and exit. Check:
- PostgreSQL container/service is running
- Connection settings in `config.toml` are correct (`host`, `port`, `username`, `password`)
- Network connectivity between MeshInfo and the database
To verify record counts:
```sql
SELECT COUNT(*) FROM nodes;
SELECT COUNT(*) FROM chat_messages;
SELECT COUNT(*) FROM telemetry;
SELECT COUNT(*) FROM traceroutes;
```

If you are upgrading from an older version that used JSON file storage, a one-time migration script is available. It must be run against your legacy deployment before upgrading, and requires:
- The JSON data files still present on disk (`nodes.json`, `chat.json`, `telemetry.json`, `traceroutes.json`)
- The old `[paths]` section in your config (with `paths.data` pointing to those files)
From that legacy environment, run:
```shell
docker exec -it meshinfo-meshinfo-1 python3 scripts/migrate_json_to_postgres.py
```

This reads your existing JSON data files and imports them into PostgreSQL. Once complete, update your config to the new PostgreSQL-only format (see `config.toml.sample`) and restart.
- Use strong passwords for PostgreSQL
- Consider SSL/TLS for database connections in production
- Restrict database access via network policies
- Set up regular PostgreSQL backups (e.g., `pg_dump`)
- Keep PostgreSQL updated with security patches
For issues or questions:
- Open an issue on GitHub
- Join #meshinfo on SacValleyMesh Discord
- Review application logs for detailed error messages