TimescaleDB Multi-Node Exit Strategy — when and how should I migrate?
Choose whether to stay on legacy TimescaleDB multi-node patterns, consolidate onto a single-node high-availability deployment, or move to a different Postgres-plus-analytics architecture after distributed hypertables were sunset.
Blockers
- capability/timescaledb-multi-node — EOL 2026-04-02
- capability/distributed-hypertables — EOL 2026-04-02
- v2.13 is the last release that includes multi-node support for Postgres 13, 14, and 15.
Who this is for
- high-scale
- real-time
- cost-sensitive
- enterprise
Candidates
Freeze on self-hosted TimescaleDB v2.13 multi-node as a short-term hold pattern
As of 2026-04-02, multi-node support and distributed hypertables are documented as sunset, and TimescaleDB v2.13 is the last release that includes multi-node support for Postgres 13, 14, and 15. This is the least disruptive path if you already run an access-node and data-node topology and need time to unwind it. The blocker is lifecycle risk: Tiger Data's support matrix says Postgres 15 support is deprecated and will be removed from TimescaleDB in June 2026, so the legacy multi-node branch is effectively a compatibility island. Licensing cost is favorable because self-hosted TimescaleDB Community Edition is free if you manage your own service, but infrastructure and operations remain your responsibility.
When to choose
Use this only for enterprise or high-scale systems that cannot redesign immediately and need a controlled transition window. The decisive factor is whether you can accept being pinned to an already-sunset feature set and an aging Postgres compatibility envelope.
Tradeoffs
Fastest short-term path because application and schema changes are minimal, but it maximizes future upgrade pressure and leaves you outside the main product direction.
Cautions
Do not treat this as a stable long-term platform. As of 2026-04-02, multi-node is already sunset, there is no forward path to the newer single-service Tiger Cloud topology without migration work, and the Postgres 15 removal scheduled for June 2026 tightens the window further.
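If you adopt this hold pattern, it can help to encode the lifecycle dates above as an explicit check in your upgrade tooling so the window cannot silently expire. A minimal sketch, assuming the 2026-04-02 multi-node sunset and the June 2026 Postgres 15 removal described above; the function name and status strings are illustrative, not any official API:

```python
from datetime import date

# Dates taken from the lifecycle notes above (illustrative constants).
MULTI_NODE_SUNSET = date(2026, 4, 2)  # multi-node / distributed hypertables EOL
PG15_REMOVAL = date(2026, 6, 1)       # Postgres 15 removal, scheduled June 2026

def freeze_window_status(today: date, pg_major: int) -> str:
    """Classify how much runway a v2.13 multi-node freeze has left."""
    if pg_major not in (13, 14, 15):
        return "unsupported: v2.13 multi-node only covers Postgres 13-15"
    if today >= PG15_REMOVAL:
        return "expired: past the June 2026 Postgres 15 removal"
    if today >= MULTI_NODE_SUNSET:
        return "closing: multi-node is already sunset; migrate soon"
    return "open: plan the exit before 2026-04-02"
```

Wiring a check like this into CI or a scheduled job turns "we know the deadline" into a failing build when the hold pattern runs out of road.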
Consolidate to a single Tiger Cloud service with HA replicas
Tiger Cloud's current managed shape is a single optimized Postgres instance per service rather than the old distributed-hypertable model. As of 2026-04-02, official pricing starts at $30/month compute plus $0.177/GB-month storage on Performance, and $36/month compute plus $0.212/GB-month storage on Scale, both billed hourly. Performance includes a single-node HA replica and 3 days of point-in-time recovery, while Scale adds up to 2 HA replicas, read replicas, and 14 days of point-in-time recovery. This is the clearest in-product landing zone if you want to stay with Timescale features but stop depending on sunset multi-node.
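The plan arithmetic above is easy to sanity-check against your own storage footprint. A small sketch using only the list prices quoted above; real invoices are billed hourly and add-ons (extra HA, tiered storage, and so on) cost extra:

```python
# List prices from the paragraph above (USD, as of 2026-04-02).
PLANS = {
    "performance": {"compute": 30.0, "storage_per_gb": 0.177},
    "scale":       {"compute": 36.0, "storage_per_gb": 0.212},
}

def monthly_estimate(plan: str, storage_gb: float) -> float:
    """Rough monthly cost: flat compute plus storage; ignores add-ons."""
    p = PLANS[plan]
    return round(p["compute"] + p["storage_per_gb"] * storage_gb, 2)

# Example: 500 GB of stored data on each plan.
for plan in PLANS:
    print(plan, monthly_estimate(plan, 500))
```

At 500 GB this works out to $118.50/month on Performance versus $142.00/month on Scale before add-ons, which frames whether Scale's extra replicas and longer recovery window are worth the delta for your workload.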
When to choose
Best for low-ops, enterprise, or cost-sensitive teams that want a supported path and can fit within a single write-primary architecture plus replicas. The decisive factor is that Tiger Cloud gives a supported HA model now, but not the old write-sharded distributed hypertable topology.
Tradeoffs
You get a current, supported product direction with managed HA, upgrades, and backups, but you give up legacy multi-node write distribution and accept Tiger Cloud plan boundaries.
Cautions
Tiger Cloud services are single Postgres instances, so validate write throughput, storage growth, and read-scaling needs before assuming it replaces distributed writes one-for-one. Add-on and higher-tier costs can apply for features such as extra HA, tiered storage, I/O boost, VPC, and production support.
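One way to act on that caution is to compare your current multi-node aggregate ingest rate against what you have benchmarked a single service sustaining, with headroom for growth. A back-of-the-envelope sketch; every number and the 70% headroom factor are placeholders you would replace with your own measurements:

```python
def fits_single_node(data_node_rows_per_s: list[float],
                     single_node_budget_rows_per_s: float,
                     headroom: float = 0.7) -> bool:
    """True if aggregate multi-node ingest fits within `headroom` of the
    measured single-node budget (illustrative sizing check only)."""
    aggregate = sum(data_node_rows_per_s)
    return aggregate <= single_node_budget_rows_per_s * headroom

# Example: four data nodes at 50k rows/s each vs a 300k rows/s benchmark.
print(fits_single_node([50_000] * 4, 300_000))
```

If the check fails, that is the signal to look at the decoupled architecture in the next candidate rather than forcing all writes onto one primary.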
Move to a decoupled architecture: keep primary Postgres elsewhere and use Tiger Cloud for analytics replication or migration
As of 2026-04-02, Tiger Data documents two current migration-oriented patterns that avoid legacy multi-node: a source Postgres connector that syncs data into Tiger Cloud in real time, and live migration for moving 100GB-10TB+ with only a few minutes of downtime. The source Postgres connector uses logical replication and lets you enable hypertables, columnstore, and continuous aggregates on replica data, but it is marked early access and not supported for production use. Live migration is closer to production-ready, but it does not support replicating INSERT, UPDATE, or DELETE statements on compressed data, and migrations above 400GB require a Tiger Console support request for pre-provisioning. This is the best fit when you are exiting distributed hypertables by separating operational Postgres from analytics rather than forcing everything back into one database role.
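Those live-migration constraints are worth capturing as a pre-flight check before you commit to this path. A sketch encoding only the 400GB pre-provisioning threshold and the compressed-data replication limit quoted above; the function name and warning wording are illustrative, not part of any Tiger Data tool:

```python
def live_migration_preflight(total_gb: float,
                             writes_to_compressed: bool) -> list[str]:
    """Flag known live-migration caveats, per the limits quoted above."""
    warnings = []
    if total_gb > 400:
        warnings.append("open a Tiger Console support request "
                        "for pre-provisioning (>400GB)")
    if writes_to_compressed:
        warnings.append("INSERT/UPDATE/DELETE on compressed data are not "
                        "replicated; pause or decompress those tables first")
    return warnings

# Example: a 1.2 TB dataset that still takes writes to compressed chunks.
for w in live_migration_preflight(total_gb=1200, writes_to_compressed=True):
    print("-", w)
```

An empty result does not mean the migration is safe, only that these two documented limits are not in play; the cautions below list further items to verify.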
When to choose
Use this when you want a current architecture without staying on sunset multi-node and your operational database can remain primary elsewhere. The decisive factor is whether analytics offload or phased migration reduces risk more than a direct in-place consolidation.
Tradeoffs
This can reduce application cutover risk and preserve existing OLTP placement, but it introduces replication or migration workflow complexity and may leave you with two database surfaces.
Cautions
The source Postgres connector is early access and explicitly not for production use as of 2026-04-02. For live migration, check compressed-table write behavior, role/password migration gaps, privilege replay, connection limits, and the fact that Tiger Cloud target services do not support tablespaces and do not expose a superuser.
Try with your AI agent
$ npm install -g pocketlantern
$ pocketlantern init
# Restart Claude Code, Cursor, or your MCP client, then ask:
# "TimescaleDB Multi-Node Exit Strategy — when and how should I migrate?"