Should I upgrade to PostgreSQL Major Upgrade—Extension Compatibility Cost now?
Database operators upgrading PostgreSQL (15→16→17) face extension incompatibilities with PostGIS, TimescaleDB, pgvector, and others. Upgrade paths and downtime risk differ between self-hosted and managed deployments.
Blockers
- requires_version: package/postgis-3.6.1 → runtime/postgresql-13-18
- requires_version: package/pgvector-0.8.2 → runtime/postgresql-13+
- requires_version: package/pgvector-0.8.2 → runtime/postgresql-17.3+ (when targeting the 17 series)
- requires_version: package/timescaledb-2.17.0+ → runtime/postgresql-17
- do not run on PostgreSQL 17.1, which introduced a breaking binary-interface change that was later reverted
- do not run on PostgreSQL 16.5, which introduced a breaking binary-interface change that was later reverted
- do not run on PostgreSQL 15.9, which introduced a breaking binary-interface change that was later reverted
Who this is for
- high-scale
- low-ops
- enterprise
- small-team
- compliance
- real-time
Candidates
pg_upgrade with per-extension compatibility pre-flight
The primary upgrade path for self-hosted PostgreSQL 15→16→17 when every installed extension has a binary available for the target major version. As of 2026-03-16: PostGIS 3.6.1 (released 2025-11-13) supports PostgreSQL 13–18 and is pg_upgrade-compatible provided the old cluster's PostGIS is updated first. pgvector 0.8.2 supports PostgreSQL 13+ and survives pg_upgrade with a binary reinstall followed by ALTER EXTENSION vector UPDATE. TimescaleDB 2.15.1 resolved the regprocedure blocker that previously prevented pg_upgrade, and 2.17.0+ adds PostgreSQL 17 support.
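The per-extension pre-flight can be sketched as a plain version gate. This is a minimal illustration, not a vendored tool: the version floors mirror the blockers above, and in a real run the installed versions would come from `psql -Atc "SELECT extname, extversion FROM pg_extension"` rather than being hard-coded.

```shell
#!/bin/sh
# Pre-flight sketch: confirm each extension meets the minimum version
# known to support the target major before pg_upgrade runs.
# Floors below are illustrative, taken from the blockers above.

# ge INSTALLED MINIMUM -> succeeds if INSTALLED >= MINIMUM
# (sort -V does dotted-version comparison; GNU coreutils / modern BSD)
ge() { [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; }

# preflight NAME INSTALLED MINIMUM -> ok/BLOCKED line, non-zero on block
preflight() {
  if ge "$2" "$3"; then
    echo "ok: $1 $2 (needs >= $3)"
  else
    echo "BLOCKED: $1 $2 (needs >= $3)"
    return 1
  fi
}

# Example: targeting PostgreSQL 17 with the floors from the blockers.
preflight postgis     3.6.1  3.6.1
preflight pgvector    0.8.2  0.8.2
preflight timescaledb 2.15.1 2.17.0 || echo "resolve blockers before pg_upgrade"
```

The TimescaleDB line demonstrates the failure path: 2.15.1 fixed the regprocedure blocker but PostgreSQL 17 support only arrives in 2.17.0, so the gate blocks.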
When to choose
Best for low-ops + small-team or enterprise + compliance stacks where every extension passes a pre-flight version check and a maintenance window of minutes is acceptable. The decisive factor is whether a compatible binary package exists for each extension on the target PostgreSQL major version before pg_upgrade runs—if it does not, choose logical replication or dump/restore instead.
Tradeoffs
Fastest upgrade method, completing in minutes especially with pg_upgrade --link mode that hard-links data files instead of copying them. pg_upgrade does not run CREATE EXTENSION (schema definitions carry over from the old cluster), so only extension shared-object files must be pre-installed on the new cluster before the upgrade runs; pg_upgrade creates a post-upgrade script for any ALTER EXTENSION calls needed. Rollback after pg_upgrade has taken writes is unsafe and must rely on backups or snapshots taken immediately before the upgrade.
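The sequence above can be sketched as a dry-run runbook. The `run` wrapper only prints the commands for review; the bindir/datadir paths are assumed placeholders for a Debian-style layout. `pg_upgrade --check` performs its own built-in compatibility pre-flight before any data is touched.

```shell
#!/bin/sh
# Dry-run sketch of the pg_upgrade invocation; paths are assumptions.
run() { echo "+ $*"; }   # swap for real execution after review

# 1. Validate clusters without modifying anything
run pg_upgrade --check \
  --old-bindir /usr/lib/postgresql/15/bin \
  --new-bindir /usr/lib/postgresql/17/bin \
  --old-datadir /var/lib/postgresql/15/main \
  --new-datadir /var/lib/postgresql/17/main

# 2. After --check passes, rerun without it; --link hard-links data
#    files instead of copying, for the minutes-scale window above.
#    Extension .so files must already be installed for the new major.
run pg_upgrade --link \
  --old-bindir /usr/lib/postgresql/15/bin \
  --new-bindir /usr/lib/postgresql/17/bin \
  --old-datadir /var/lib/postgresql/15/main \
  --new-datadir /var/lib/postgresql/17/main

# 3. Run the script pg_upgrade emits for any needed ALTER EXTENSION calls
run psql -f update_extensions.sql
```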
Cautions
Do not run pg_upgrade before upgrading PostGIS on the old cluster. The PostGIS installation docs require updating to a pg_upgrade-compatible PostGIS version on the source first, then running SELECT postgis_extensions_upgrade() on the upgraded cluster after. pgvector has a documented linking bug on PostgreSQL 17.0–17.2 (unresolved external symbol float_to_shortest_decimal_bufn); use PostgreSQL 17.3 or later as your target. TimescaleDB should also not be run on minor versions 17.1, 16.5, or 15.9, which introduced a breaking binary interface change that was reverted in later patch releases.
Logical replication side-by-side cutover
Build a new cluster at the target PostgreSQL major version, replicate data continuously from the source, validate under live workload, then switch traffic—cutting over in seconds. As of 2026-03-16, CloudNativePG 1.26 documents this as its supported online major upgrade path and notes that the entire cluster is unavailable during a pg_upgrade path but not during a logical replication cutover.
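The cutover flow can be sketched as a dry-run script. This is a generic logical-replication sequence, not the CloudNativePG-managed flow; the DSNs and the `pg17_upgrade_*` publication/subscription names are illustrative assumptions, and the schema must already be loaded on the new cluster because logical replication does not carry DDL.

```shell
#!/bin/sh
# Dry-run sketch of a side-by-side major-version cutover; prints the
# commands a real runbook would execute. All names are assumptions.
OLD_DSN="${OLD_DSN:-postgresql://old-host/app}"
NEW_DSN="${NEW_DSN:-postgresql://new-host/app}"
run() { echo "+ $*"; }   # swap for real execution after review

# 1. Publish everything on the old (source) cluster
run psql "$OLD_DSN" -c "CREATE PUBLICATION pg17_upgrade_pub FOR ALL TABLES"

# 2. Subscribe from the new cluster; initial sync then streams changes
run psql "$NEW_DSN" -c "CREATE SUBSCRIPTION pg17_upgrade_sub CONNECTION '$OLD_DSN' PUBLICATION pg17_upgrade_pub"

# 3. At cutover: pause traffic at the proxy, drain, switch, resume
run psql -h pgbouncer -p 6432 pgbouncer -c "PAUSE"
run psql "$NEW_DSN" -c "ALTER SUBSCRIPTION pg17_upgrade_sub DISABLE"
run psql -h pgbouncer -p 6432 pgbouncer -c "RESUME"  # proxy now targets the new cluster
```

Until the PAUSE step, the old cluster remains the live system and a clean rollback target.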
When to choose
Best for high-scale + real-time or enterprise + compliance teams where extension complexity, TimescaleDB hypertable structures, or near-zero-downtime requirements make pg_upgrade risky. The decisive factor is whether the extra capacity cost and orchestration overhead of running two clusters in parallel is worth a long validation window and a live rollback option.
Tradeoffs
Cutover window can be as short as seconds with a connection proxy such as PgBouncer PAUSE/RESUME, making it the closest to zero-downtime of any self-hosted upgrade path. The old cluster stays live as a rollback target until cutover. The cost is temporarily double capacity, strict schema-change discipline during the replication window (logical replication does not replicate DDL, sequence state, or large objects), and substantial orchestration overhead.
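Because sequence state is not replicated, a common pre-cutover step is copying each sequence's current value across. A sketch that generates the `setval` statements; in a real run the input would come from `psql "$OLD_DSN" -Atc "SELECT sequencename, last_value FROM pg_sequences WHERE schemaname='public'"` (note `last_value` can be empty for never-used sequences):

```shell
#!/bin/sh
# Sketch: turn "name value" pairs from the old cluster into setval
# statements to replay on the new cluster just before cutover.
gen_setval_sql() {
  while read -r seq val; do
    printf "SELECT setval('%s', %s);\n" "$seq" "$val"
  done
}

# Illustration with canned input; pipe the psql query in practice.
printf 'orders_id_seq 41923\nusers_id_seq 1088\n' | gen_setval_sql
```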
Cautions
TimescaleDB hypertable storage has documented limitations with standard PostgreSQL logical replication—verify whether your TimescaleDB version and specific feature usage are supported before relying on this as a TimescaleDB upgrade strategy. PostgreSQL also documents that upgrade of a cluster that itself uses logical replication for streaming to subscribers is only fully supported when all source members are on version 17.0 or later, so check this constraint if your source cluster has active logical replication subscribers.
Dump/restore into fresh cluster
Export data from the old cluster with pg_dump or pg_dumpall and restore into a freshly initialized PostgreSQL cluster at the target major version, with all extensions explicitly installed before restore. This is the most conservative and portable upgrade path, but it requires the longest outage, proportional to database size.
When to choose
Best for cost-sensitive + small-team or compliance + enterprise teams where the dataset is small enough to tolerate a longer outage, extension complexity makes pg_upgrade pre-flight risky, or a deterministic clean-slate rebuild is the safest audit story. Choose this when pg_upgrade is blocked by an unresolved extension issue and logical replication adds too much orchestration cost.
Tradeoffs
Easiest path to reason about: the new cluster starts fresh and every extension is explicitly installed and tested before data loads, fully sidestepping pg_upgrade extension binary compatibility questions. Most portable and auditable. The tradeoff is the longest downtime, highest I/O cost, and impracticality for large databases. Dumps made with pg_dump can be restored into newer PostgreSQL major versions, but pg_dump cannot dump from a server whose major version is newer than its own.
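The version-direction rule reduces to a single major-version comparison, sketched here as a guard you might put at the top of a dump script (the function name and example majors are illustrative):

```shell
#!/bin/sh
# Sketch of the pg_dump direction rule: the pg_dump binary's major
# version must be >= the source server's major version. The restore
# target may then be the same or any newer major.
can_dump() {  # can_dump PG_DUMP_MAJOR SERVER_MAJOR
  [ "$1" -ge "$2" ]
}

if can_dump 17 15; then
  echo "ok: pg_dump 17 can dump a PostgreSQL 15 server"
fi
if ! can_dump 15 17; then
  echo "blocked: pg_dump 15 cannot dump a PostgreSQL 17 server"
fi
```

Practically this means: always dump with the client tools shipped with the target major, not the source major.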
Cautions
Run ANALYZE on all tables after restore even when statistics are included in the dump, because PostgreSQL recommends this for best optimizer behavior on a freshly restored cluster. PostGIS requires running SELECT postgis_extensions_upgrade() after installation on the fresh cluster because the restored schema references objects from the old PostGIS catalog version. Sequence state and logical replication subscription metadata are not fully preserved by dump/restore and must be verified and reset explicitly after cutover.
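The post-restore steps above can be collected into a short dry-run checklist; `DBNAME` is an assumed placeholder, and `vacuumdb --analyze-in-stages` is the standard way to rebuild planner statistics incrementally on a fresh cluster:

```shell
#!/bin/sh
# Dry-run sketch of post-restore housekeeping; prints commands only.
DBNAME="${DBNAME:-app}"
run() { echo "+ $*"; }   # swap for real execution after review

# 1. Reconcile restored PostGIS schema objects with the installed version
run psql "$DBNAME" -c "SELECT postgis_extensions_upgrade();"

# 2. Rebuild optimizer statistics in stages (coarse first, then full)
run vacuumdb --analyze-in-stages --dbname "$DBNAME"

# 3. Spot-check sequence state and any logical replication subscriptions
run psql "$DBNAME" -c "SELECT sequencename, last_value FROM pg_sequences;"
```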
Try with your AI agent
$ npm install -g pocketlantern
$ pocketlantern init
# Restart Claude Code, Cursor, or your MCP client, then ask:
# "Should I upgrade to PostgreSQL Major Upgrade—Extension Compatibility Cost now?"