Which strategy should I use for KEDA PromQL?
Decide how to redesign KEDA autoscaling triggers for Kubernetes workloads now that Google Cloud's MQL deprecation has changed the supported query path and KEDA deployments need PromQL-compatible monitoring endpoints.
Blockers
- format/mql — EOL 2025-07-22
- requires_version: package/keda-prometheus-scaler → protocol/cloud-monitoring-promql-endpoint
- Lock-in via vendor/google-cloud
Who this is for
- low-ops
- microservices
- real-time
- cost-sensitive
Candidates
Use the KEDA Prometheus scaler against the Cloud Monitoring PromQL endpoint
As of 2026-04-05, the MQL transition has already passed the main policy milestones: Google ended customer support for writing valid MQL on 2025-07-22, and shortly after that new MQL charts, dashboards, and alerting policies stopped being creatable in the Google Cloud console, although existing MQL assets still run and API-based creation still works. KEDA maintainers responded by deprecating the MQL-dependent GCP scalers for Cloud Tasks, Pub/Sub, and Stackdriver and recommending migration to the `prometheus` scaler with `https://monitoring.googleapis.com/v1/projects/PROJECT_ID/location/global/prometheus`. This path is the lowest-friction redesign when your scaling signal already exists as Google-managed metrics because Cloud Monitoring PromQL can query over 6,500 free metrics and free Kubernetes metrics without sending data to Managed Service for Prometheus. Query costs still matter: as of 2026-04-05, Monitoring read API calls are billed at `$0.50/million time series returned` after the first `1 million` per billing account, while write API calls are free.
When to choose
Use this when you need the fastest replacement for existing KEDA triggers that were effectively built around MQL-backed Google metrics such as Pub/Sub or Cloud Tasks, and you want the smallest operational change. It is the right default if your autoscaling inputs are already in Cloud Monitoring and you do not want to add a scrape pipeline just to restore KEDA support.
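As a sketch of what the migrated trigger can look like, assuming Workload Identity for authentication: the Deployment name, subscription label, and threshold below are placeholders, and the metric name follows Cloud Monitoring's PromQL naming convention (`pubsub.googleapis.com/subscription/num_undelivered_messages` becomes `pubsub_googleapis_com:subscription_num_undelivered_messages`).

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: gcp-workload-identity
spec:
  podIdentity:
    provider: gcp            # authenticate via GKE Workload Identity
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: pubsub-backlog-scaler
spec:
  scaleTargetRef:
    name: worker             # placeholder Deployment name
  triggers:
    - type: prometheus
      metadata:
        serverAddress: https://monitoring.googleapis.com/v1/projects/PROJECT_ID/location/global/prometheus
        # Undelivered-message backlog for a placeholder subscription
        query: sum(pubsub_googleapis_com:subscription_num_undelivered_messages{monitored_resource="pubsub_subscription",subscription_id="work-queue"})
        threshold: "100"
      authenticationRef:
        name: gcp-workload-identity
```

This replaces a deprecated `gcp-pubsub` trigger with a `prometheus` trigger that reads the same backlog metric through the PromQL endpoint, so no new scrape pipeline is introduced.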
Tradeoffs
Pros: no separate Prometheus cluster, no ingestion setup for existing Google metrics, and KEDA's migration examples are direct. Cons: you are now coupled to Cloud Monitoring's PromQL semantics and Monitoring API read billing instead of upstream Prometheus behavior.
Cautions
Cloud Monitoring PromQL is not a drop-in copy of upstream Prometheus because Monarch is strongly typed; official docs note that functions like `rate()` on a GAUGE or `histogram_quantile()` on the wrong metric type can fail even if the same query works in upstream Prometheus. Existing MQL assets continuing to run is not a good reason to keep new autoscaling logic on MQL because KEDA has already redirected users toward the Prometheus scaler.
Standardize on Google Cloud Managed Service for Prometheus for autoscaling metrics
As of 2026-04-05, Managed Service for Prometheus is Google's Prometheus-compatible path for custom and application metrics, built on Monarch and queryable through Cloud Monitoring, Grafana, or any client that can read the Prometheus API. It gives you a stronger long-term migration target than console-era MQL because PromQL is the native query surface and the service supports both managed and self-deployed collection plus Prometheus rule-file migration. Pricing is sample-based rather than byte-based: `$0.06/million samples` for the first `50 billion` samples ingested, then `$0.048`, `$0.036`, and `$0.024` per million at higher tiers, with no additional storage or retention charge and 24 months of retention included. Default quotas are also explicit: `100,000 samples/second` ingest per project and `100 QPS` read quota per metrics scope.
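To see what sample-based pricing implies for an autoscaling signal, here is a back-of-the-envelope sketch; the function name and the 30-day month are assumptions, and only the `$0.06/million` first-tier rate comes from the pricing above.

```python
def monthly_ingest(series: int, scrape_interval_s: int,
                   price_per_million: float = 0.06,
                   seconds_per_month: int = 30 * 24 * 3600) -> tuple[float, float]:
    """Estimate samples ingested per month and the first-tier cost in USD."""
    samples = series * seconds_per_month / scrape_interval_s
    return samples, samples / 1e6 * price_per_million

# 1,000 active series scraped every 30s:
samples, cost = monthly_ingest(1_000, 30)
# ~86.4 million samples/month, roughly $5 at the first tier
```

Doubling the scrape interval halves the sample count, which is why longer sampling periods appear among Google's documented cost levers.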
When to choose
Use this when autoscaling must depend on application or custom Prometheus metrics, or when you want one PromQL surface for both KEDA and broader observability. It is the better redesign if you want managed retention, managed rule evaluation options, and a supported migration path for existing Prometheus dashboards and rules.
Tradeoffs
Pros: managed storage, long retention, Prometheus-compatible APIs, and no self-hosted TSDB operations. Cons: ingestion cost becomes the main budget driver, and high-cardinality or aggressive scrape intervals can turn autoscaling telemetry into a meaningful Cloud Monitoring bill.
Cautions
Managed Service for Prometheus still uses Cloud Monitoring retrieval under the hood, so Monitoring API read pricing applies in addition to sample-ingest pricing. Cost control matters early: Google explicitly documents filtering, sparse histogram pricing differences, and longer sampling periods as key levers, so teams should review label cardinality before moving KEDA triggers onto high-churn app metrics.
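Because read billing counts time series returned, the read-side bill scales with how often KEDA polls and how many series each query returns. A rough sketch under assumed defaults (KEDA's default `pollingInterval` is 30s; the function name and 30-day month are hypothetical conveniences):

```python
def monthly_series_read(polling_interval_s: int, series_per_query: int,
                        scaled_objects: int = 1,
                        seconds_per_month: int = 30 * 24 * 3600) -> float:
    """Time series returned per month across all ScaledObjects' PromQL polls."""
    polls = seconds_per_month / polling_interval_s
    return polls * series_per_query * scaled_objects

# 10 ScaledObjects polling every 30s, each query returning one series:
monthly_series_read(30, 1, scaled_objects=10)  # 864,000 -- inside the 1M free tier
```

Pre-aggregating in the PromQL query (e.g. wrapping it in `sum(...)`) keeps `series_per_query` at 1, which matters far more for read billing than polling frequency alone.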
Try with your AI agent
$ npm install -g pocketlantern
$ pocketlantern init
# Restart Claude Code, Cursor, or your MCP client, then ask:
# "Which strategy should I use for KEDA PromQL?"