
Service status API

This document describes the API endpoints for retrieving service status and cluster information for Apache Druid.

In this document, http://SERVICE_IP:SERVICE_PORT is a placeholder for the address and port of the Druid service you want to query. For example, in the quickstart configuration, replace http://ROUTER_IP:ROUTER_PORT with http://localhost:8888.

Common

All services support the following endpoints.

You can use each endpoint with the port of the corresponding service. The following table lists the default port for each service in a local configuration:

Service       | Port
------------- | ----
Coordinator   | 8081
Overlord      | 8081
Router        | 8888
Broker        | 8082
Historical    | 8083
MiddleManager | 8091
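For scripting against a local deployment, the port table can be encoded directly. The DEFAULT_PORTS map and status_url helper below are illustrative assumptions for this sketch, not part of Druid:

```python
# Default local service ports, matching the table above.
DEFAULT_PORTS = {
    "coordinator": 8081,
    "overlord": 8081,
    "router": 8888,
    "broker": 8082,
    "historical": 8083,
    "middleManager": 8091,
}

def status_url(service, endpoint="/status/health", host="localhost"):
    """Build the URL for a common status endpoint on the given service."""
    return "http://%s:%d%s" % (host, DEFAULT_PORTS[service], endpoint)
```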

Get service information

Retrieves the Druid version, loaded extensions, memory used, total memory, and other useful information about the individual service.

Modify the host and port for the endpoint to match the service to query. Refer to the default service ports for the port numbers.

URL

GET /status

Responses

200 SUCCESS


Successfully retrieved service information


Sample request

cURL

curl "http://ROUTER_IP:ROUTER_PORT/status"

HTTP

GET /status HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response


{
  "version": "26.0.0",
  "modules": [
      {
          "name": "org.apache.druid.common.aws.AWSModule",
          "artifact": "druid-aws-common",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.common.gcp.GcpModule",
          "artifact": "druid-gcp-common",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.storage.hdfs.HdfsStorageDruidModule",
          "artifact": "druid-hdfs-storage",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.indexing.kafka.KafkaIndexTaskModule",
          "artifact": "druid-kafka-indexing-service",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.query.aggregation.datasketches.theta.SketchModule",
          "artifact": "druid-datasketches",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.query.aggregation.datasketches.theta.oldapi.OldApiSketchModule",
          "artifact": "druid-datasketches",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.query.aggregation.datasketches.quantiles.DoublesSketchModule",
          "artifact": "druid-datasketches",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.query.aggregation.datasketches.tuple.ArrayOfDoublesSketchModule",
          "artifact": "druid-datasketches",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.query.aggregation.datasketches.hll.HllSketchModule",
          "artifact": "druid-datasketches",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.query.aggregation.datasketches.kll.KllSketchModule",
          "artifact": "druid-datasketches",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.msq.guice.MSQExternalDataSourceModule",
          "artifact": "druid-multi-stage-query",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.msq.guice.MSQIndexingModule",
          "artifact": "druid-multi-stage-query",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.msq.guice.MSQDurableStorageModule",
          "artifact": "druid-multi-stage-query",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.msq.guice.MSQServiceClientModule",
          "artifact": "druid-multi-stage-query",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.msq.guice.MSQSqlModule",
          "artifact": "druid-multi-stage-query",
          "version": "26.0.0"
      },
      {
          "name": "org.apache.druid.msq.guice.SqlTaskModule",
          "artifact": "druid-multi-stage-query",
          "version": "26.0.0"
      }
  ],
  "memory": {
      "maxMemory": 268435456,
      "totalMemory": 268435456,
      "freeMemory": 139060688,
      "usedMemory": 129374768,
      "directMemory": 134217728
  }
}
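For example, a monitoring script might parse the /status payload to list the loaded extension artifacts and compute heap utilization. The sketch below embeds a trimmed copy of the sample response above rather than calling a live service:

```python
import json

# A trimmed /status response like the sample above (values are illustrative).
status = json.loads("""
{
  "version": "26.0.0",
  "modules": [
    {"name": "org.apache.druid.indexing.kafka.KafkaIndexTaskModule",
     "artifact": "druid-kafka-indexing-service", "version": "26.0.0"},
    {"name": "org.apache.druid.msq.guice.MSQIndexingModule",
     "artifact": "druid-multi-stage-query", "version": "26.0.0"}
  ],
  "memory": {"maxMemory": 268435456, "totalMemory": 268435456,
             "freeMemory": 139060688, "usedMemory": 129374768,
             "directMemory": 134217728}
}
""")

# Distinct extension artifacts loaded on this service.
artifacts = sorted({m["artifact"] for m in status["modules"]})

# Fraction of the maximum heap currently in use.
mem = status["memory"]
used_fraction = mem["usedMemory"] / mem["maxMemory"]
```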

Get service health

Retrieves the online status of the individual Druid service as a simple health check to determine whether the service is running and accessible. If the service is online, the endpoint always returns the boolean value true, indicating that the service can receive API calls. This endpoint is suitable for automated health checks.

Modify the host and port for the endpoint to match the service to query. Refer to the default service ports for the port numbers.

Additional checks for readiness should use the Historical segment readiness and Broker query readiness endpoints.

URL

GET /status/health

Responses

200 SUCCESS


Successfully retrieved service health

Sample request

cURL

curl "http://ROUTER_IP:ROUTER_PORT/status/health"

HTTP

GET /status/health HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response


true
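When scripting against this endpoint, note that the body is the JSON literal true, so a decoded response should be compared against the boolean, not the string "true". A minimal sketch, with the HTTP call abstracted behind an injected fetch callable (an assumption of this example, not a Druid API):

```python
def is_alive(fetch):
    """Liveness check against /status/health.

    `fetch` is any callable that returns the decoded response body for a path;
    in production it would perform the HTTP GET against the service.
    """
    try:
        # The endpoint returns the JSON boolean literal `true` when online.
        return fetch("/status/health") is True
    except Exception:
        # Connection errors mean the service is not accessible.
        return False
```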

Get configuration properties

Retrieves the current configuration properties of the individual service queried.

Modify the host and port for the endpoint to match the service to query. Refer to the default service ports for the port numbers.

URL

GET /status/properties

Responses

200 SUCCESS


Successfully retrieved service configuration properties

Sample request

cURL

curl "http://ROUTER_IP:ROUTER_PORT/status/properties"

HTTP

GET /status/properties HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response


{
  "gopherProxySet": "false",
  "awt.toolkit": "sun.lwawt.macosx.LWCToolkit",
  "druid.monitoring.monitors": "[\"org.apache.druid.java.util.metrics.JvmMonitor\"]",
  "java.specification.version": "11",
  "sun.cpu.isalist": "",
  "druid.plaintextPort": "8888",
  "sun.jnu.encoding": "UTF-8",
  "druid.indexing.doubleStorage": "double",
  "druid.metadata.storage.connector.port": "1527",
  "java.class.path": "/Users/genericUserPath",
  "log4j.shutdownHookEnabled": "true",
  "java.vm.vendor": "Homebrew",
  "sun.arch.data.model": "64",
  "druid.extensions.loadList": "[\"druid-hdfs-storage\", \"druid-kafka-indexing-service\", \"druid-datasketches\", \"druid-multi-stage-query\"]",
  "java.vendor.url": "https://github.com/Homebrew/homebrew-core/issues",
  "druid.router.coordinatorServiceName": "druid/coordinator",
  "user.timezone": "UTC",
  "druid.global.http.eagerInitialization": "false",
  "os.name": "Mac OS X",
  "java.vm.specification.version": "11",
  "sun.java.launcher": "SUN_STANDARD",
  "user.country": "US",
  "sun.boot.library.path": "/opt/homebrew/Cellar/openjdk@11/11.0.19/libexec/openjdk.jdk/Contents/Home/lib",
  "sun.java.command": "org.apache.druid.cli.Main server router",
  "http.nonProxyHosts": "local|*.local|169.254/16|*.169.254/16",
  "jdk.debug": "release",
  "druid.metadata.storage.connector.host": "localhost",
  "sun.cpu.endian": "little",
  "druid.zk.paths.base": "/druid",
  "user.home": "/Users/genericUser",
  "user.language": "en",
  "java.specification.vendor": "Oracle Corporation",
  "java.version.date": "2023-04-18",
  "java.home": "/opt/homebrew/Cellar/openjdk@11/11.0.19/libexec/openjdk.jdk/Contents/Home",
  "druid.service": "druid/router",
  "druid.selectors.coordinator.serviceName": "druid/coordinator",
  "druid.metadata.storage.connector.connectURI": "jdbc:derby://localhost:1527/var/druid/metadata.db;create=true",
  "file.separator": "/",
  "druid.selectors.indexing.serviceName": "druid/overlord",
  "java.vm.compressedOopsMode": "Zero based",
  "druid.metadata.storage.type": "derby",
  "line.separator": "\n",
  "druid.log.path": "/Users/genericUserPath",
  "java.vm.specification.vendor": "Oracle Corporation",
  "java.specification.name": "Java Platform API Specification",
  "druid.indexer.logs.directory": "var/druid/indexing-logs",
  "java.awt.graphicsenv": "sun.awt.CGraphicsEnvironment",
  "druid.router.defaultBrokerServiceName": "druid/broker",
  "druid.storage.storageDirectory": "var/druid/segments",
  "sun.management.compiler": "HotSpot 64-Bit Tiered Compilers",
  "ftp.nonProxyHosts": "local|*.local|169.254/16|*.169.254/16",
  "java.runtime.version": "11.0.19+0",
  "user.name": "genericUser",
  "druid.indexer.logs.type": "file",
  "druid.host": "localhost",
  "log4j2.is.webapp": "false",
  "path.separator": ":",
  "os.version": "12.6.5",
  "druid.lookup.enableLookupSyncOnStartup": "false",
  "java.runtime.name": "OpenJDK Runtime Environment",
  "druid.zk.service.host": "localhost",
  "file.encoding": "UTF-8",
  "druid.sql.planner.useGroupingSetForExactDistinct": "true",
  "druid.router.managementProxy.enabled": "true",
  "java.vm.name": "OpenJDK 64-Bit Server VM",
  "java.vendor.version": "Homebrew",
  "druid.startup.logging.logProperties": "true",
  "java.vendor.url.bug": "https://github.com/Homebrew/homebrew-core/issues",
  "log4j.shutdownCallbackRegistry": "org.apache.druid.common.config.Log4jShutdown",
  "java.io.tmpdir": "var/tmp",
  "druid.sql.enable": "true",
  "druid.emitter.logging.logLevel": "info",
  "java.version": "11.0.19",
  "user.dir": "/Users/genericUser/Downloads/apache-druid-26.0.0",
  "os.arch": "aarch64",
  "java.vm.specification.name": "Java Virtual Machine Specification",
  "druid.node.type": "router",
  "java.awt.printerjob": "sun.lwawt.macosx.CPrinterJob",
  "sun.os.patch.level": "unknown",
  "java.util.logging.manager": "org.apache.logging.log4j.jul.LogManager",
  "java.library.path": "/Users/genericUserPath",
  "java.vendor": "Homebrew",
  "java.vm.info": "mixed mode",
  "java.vm.version": "11.0.19+0",
  "druid.emitter": "noop",
  "sun.io.unicode.encoding": "UnicodeBig",
  "druid.storage.type": "local",
  "druid.expressions.useStrictBooleans": "true",
  "java.class.version": "55.0",
  "socksNonProxyHosts": "local|*.local|169.254/16|*.169.254/16",
  "druid.server.hiddenProperties": "[\"druid.s3.accessKey\",\"druid.s3.secretKey\",\"druid.metadata.storage.connector.password\", \"password\", \"key\", \"token\", \"pwd\"]"
}
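Because the response mixes JVM system properties with Druid's own settings, scripts often filter for keys with the druid. prefix. A small sketch over a trimmed, hypothetical property map:

```python
# A trimmed, hypothetical /status/properties response.
props = {
    "java.version": "11.0.19",
    "os.name": "Mac OS X",
    "druid.service": "druid/router",
    "druid.host": "localhost",
}

# Keep only Druid's own configuration properties.
druid_props = {k: v for k, v in props.items() if k.startswith("druid.")}
```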

Get node discovery status and cluster integration confirmation

Retrieves a JSON map of the form {"selfDiscovered": true/false}, indicating whether the node has received confirmation from the Druid cluster's central node discovery mechanism (currently ZooKeeper) that it has been added to the cluster.

Consider a Druid node "healthy" or "ready" in automated deployment/container management systems only when this endpoint returns {"selfDiscovered": true}. A node with network issues may become isolated from the rest of the cluster and should not be considered healthy. For nodes that use ZooKeeper segment discovery, a response of {"selfDiscovered": true} indicates that the node's ZooKeeper client has started receiving data from the ZooKeeper cluster, enabling timely discovery of segments and other nodes.

URL

GET /status/selfDiscovered/status

Responses

200 SUCCESS


Node was successfully added to the cluster

Sample request

cURL

curl "http://ROUTER_IP:ROUTER_PORT/status/selfDiscovered/status"

HTTP

GET /status/selfDiscovered/status HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response


{
  "selfDiscovered": true
}
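A deployment script can gate on this endpoint by polling until the node reports itself discovered. The sketch below abstracts the HTTP call behind an injected fetch callable (an assumption of this example); the timeout and interval defaults are arbitrary:

```python
import json
import time

def wait_for_self_discovery(fetch, timeout_s=120.0, interval_s=5.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll /status/selfDiscovered/status until the node reports itself discovered.

    `fetch` is an injected callable returning the raw response body for a path;
    in a real deployment it would issue the HTTP GET against the service.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        try:
            body = json.loads(fetch("/status/selfDiscovered/status"))
            if body.get("selfDiscovered") is True:
                return True
        except Exception:
            pass  # service not reachable yet; keep polling
        sleep(interval_s)
    return False
```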

Get node self-discovery status

Returns an HTTP status code to indicate node discovery within the Druid cluster. This endpoint is similar to the status/selfDiscovered/status endpoint but relies on HTTP status codes alone. Use it for monitoring checks that cannot examine the response body, such as AWS load balancer health checks.

URL

GET /status/selfDiscovered

Responses

200 SUCCESS: Successfully retrieved node status
503 SERVICE UNAVAILABLE: Unsuccessful node self-discovery

Sample request

cURL

curl "http://ROUTER_IP:ROUTER_PORT/status/selfDiscovered"

HTTP

GET /status/selfDiscovered HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response

A successful response to this endpoint results in an empty response body.

Coordinator

Get Coordinator leader address

Retrieves the address of the current leader Coordinator of the cluster. If any request is sent to a non-leader Coordinator, the request is automatically redirected to the leader Coordinator.

URL

GET /druid/coordinator/v1/leader

Responses

200 SUCCESS


Successfully retrieved leader Coordinator address


Sample request

cURL

curl "http://ROUTER_IP:ROUTER_PORT/druid/coordinator/v1/leader"

HTTP

GET /druid/coordinator/v1/leader HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response


http://localhost:8081

Get Coordinator leader status

Retrieves a JSON object with a leader key. The value is true if this server is the current leader Coordinator of the cluster. To get the address of the leader Coordinator node, see the leader endpoint.

Use this endpoint as a load balancer status check when you only want the active leader to be considered in-service at the load balancer.

URL

GET /druid/coordinator/v1/isLeader

Responses

200 SUCCESS: Current server is the leader
404 NOT FOUND: Current server is not the leader


Sample request

cURL

curl "http://COORDINATOR_IP:COORDINATOR_PORT/druid/coordinator/v1/isLeader"

HTTP

GET /druid/coordinator/v1/isLeader HTTP/1.1
Host: http://COORDINATOR_IP:COORDINATOR_PORT

Sample response


{
  "leader": true
}

Overlord

Get Overlord leader address

Retrieves the address of the current leader Overlord of the cluster. In a cluster of multiple Overlords, only one Overlord assumes the leading role, while the remaining Overlords remain on standby.

URL

GET /druid/indexer/v1/leader

Responses

200 SUCCESS


Successfully retrieved leader Overlord address


Sample request

cURL

curl "http://ROUTER_IP:ROUTER_PORT/druid/indexer/v1/leader"

HTTP

GET /druid/indexer/v1/leader HTTP/1.1
Host: http://ROUTER_IP:ROUTER_PORT

Sample response


http://localhost:8081

Get Overlord leader status

Retrieves a JSON object with a leader property whose value is true or false, indicating whether this server is the current leader Overlord of the cluster. To get the address of the leader Overlord node, see the leader endpoint.

Use this endpoint as a load balancer status check when you only want the active leader to be considered in-service at the load balancer.

URL

GET /druid/indexer/v1/isLeader

Responses

200 SUCCESS: Current server is the leader
404 NOT FOUND: Current server is not the leader


Sample request

cURL

curl "http://OVERLORD_IP:OVERLORD_PORT/druid/indexer/v1/isLeader"

HTTP

GET /druid/indexer/v1/isLeader HTTP/1.1
Host: http://OVERLORD_IP:OVERLORD_PORT

Sample response


{
  "leader": true
}

MiddleManager

Get MiddleManager state status

Retrieves the enabled state of the MiddleManager. Returns a JSON object keyed by the combined druid.host and druid.port, with a boolean true or false state as the value.

URL

GET /druid/worker/v1/enabled

Responses

200 SUCCESS


Successfully retrieved MiddleManager state


Sample request

cURL

curl "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/enabled"

HTTP

GET /druid/worker/v1/enabled HTTP/1.1
Host: http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT

Sample response


{
  "localhost:8091": true
}

Get active tasks

Retrieves a list of active tasks running on the MiddleManager. Returns a JSON list of task ID strings. For normal usage, use the /druid/indexer/v1/tasks Tasks API endpoint or one of its task-state-specific variants instead.

URL

GET /druid/worker/v1/tasks

Responses

200 SUCCESS


Successfully retrieved active tasks


Sample request

cURL

curl "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/tasks"

HTTP

GET /druid/worker/v1/tasks HTTP/1.1
Host: http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT

Sample response


[
  "index_parallel_wikipedia_mgchefio_2023-06-13T22:18:05.360Z"
]

Get task log

Retrieves the task log output stream by task ID. For normal usage, use the /druid/indexer/v1/task/{taskId}/log Tasks API endpoint instead.

URL

GET /druid/worker/v1/task/:taskId/log

Shut down running task

Shuts down a running task by ID. For normal usage, use the /druid/indexer/v1/task/:taskId/shutdown Tasks API endpoint instead.

URL

POST /druid/worker/v1/task/:taskId/shutdown

Responses

200 SUCCESS


Successfully shut down a task


Sample request

The following example shuts down the task with ID index_kafka_wikiticker_f7011f8ffba384b_fpeclode.

cURL

curl --request POST "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/task/index_kafka_wikiticker_f7011f8ffba384b_fpeclode/shutdown"

HTTP

POST /druid/worker/v1/task/index_kafka_wikiticker_f7011f8ffba384b_fpeclode/shutdown HTTP/1.1
Host: http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT

Sample response


{
  "task":"index_kafka_wikiticker_f7011f8ffba384b_fpeclode"
}

Disable MiddleManager

Disables a MiddleManager, causing it to stop accepting new tasks while completing the tasks it is already running. Returns a JSON object keyed by the combined druid.host and druid.port.

URL

POST /druid/worker/v1/disable

Responses

200 SUCCESS


Successfully disabled MiddleManager

Sample request

cURL

curl --request POST "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/disable"

HTTP

POST /druid/worker/v1/disable HTTP/1.1
Host: http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT

Sample response


{
  "localhost:8091":"disabled"
}

Enable MiddleManager

Enables a MiddleManager, allowing it to accept new tasks again if it was previously disabled. Returns a JSON object keyed by the combined druid.host and druid.port.

URL

POST /druid/worker/v1/enable

Responses

200 SUCCESS


Successfully enabled MiddleManager

Sample request

cURL

curl --request POST "http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT/druid/worker/v1/enable"

HTTP

POST /druid/worker/v1/enable HTTP/1.1
Host: http://MIDDLEMANAGER_IP:MIDDLEMANAGER_PORT

Sample response


{
  "localhost:8091":"enabled"
}
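The disable endpoint and the active-task list combine naturally into a graceful drain before maintenance: stop new work, then wait for running tasks to finish. A sketch, where post and get are injected callables standing in for an HTTP client pointed at the MiddleManager (they are assumptions of this example, not part of Druid):

```python
def drain_middlemanager(post, get, max_polls=360, sleep=lambda: None):
    """Drain a MiddleManager before maintenance.

    `post` and `get` issue the request against the worker and return the
    decoded response body; inject real HTTP calls in production.
    """
    # Stop accepting new tasks; tasks already running continue to completion.
    post("/druid/worker/v1/disable")
    # Poll the active-task list until it is empty.
    for _ in range(max_polls):
        if get("/druid/worker/v1/tasks") == []:
            return True
        sleep()
    return False
```

After the drained MiddleManager is serviced, re-enable it with POST /druid/worker/v1/enable.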

Historical

Get segment load status

Retrieves a JSON object of the form {"cacheInitialized": value}, where value is either true or false, indicating whether all segments in the local cache have been loaded.

Use this endpoint to know when a Historical service is ready to serve queries after a restart.

URL

GET /druid/historical/v1/loadstatus

Responses

200 SUCCESS


Successfully retrieved status

Sample request

cURL

curl "http://HISTORICAL_IP:HISTORICAL_PORT/druid/historical/v1/loadstatus"

HTTP

GET /druid/historical/v1/loadstatus HTTP/1.1
Host: http://HISTORICAL_IP:HISTORICAL_PORT

Sample response


{
  "cacheInitialized": true
}

Get segment readiness

Retrieves a status code indicating whether all segments in the local cache have been loaded. Similar to /druid/historical/v1/loadstatus, but instead of returning JSON with a flag, it returns an HTTP status code.

URL

GET /druid/historical/v1/readiness

Responses

200 SUCCESS: Segments in local cache successfully loaded
503 SERVICE UNAVAILABLE: Segments in local cache have not been loaded

Sample request

cURL

curl "http://HISTORICAL_IP:HISTORICAL_PORT/druid/historical/v1/readiness"

HTTP

GET /druid/historical/v1/readiness HTTP/1.1
Host: http://HISTORICAL_IP:HISTORICAL_PORT

Sample response

A successful response to this endpoint results in an empty response body.

Broker

Get Broker query load status

Retrieves a flag indicating if the Broker knows about all segments in the cluster. Use this endpoint to know when a Broker service is ready to accept queries after a restart.

URL

GET /druid/broker/v1/loadstatus

Responses

200 SUCCESS


Segments successfully loaded

Sample request

cURL

curl "http://BROKER_IP:BROKER_PORT/druid/broker/v1/loadstatus"

HTTP

GET /druid/broker/v1/loadstatus HTTP/1.1
Host: http://BROKER_IP:BROKER_PORT

Sample response


{
  "inventoryInitialized": true
}

Get Broker query readiness

Retrieves a status code to indicate Broker readiness. Readiness means the Broker knows about all segments in the cluster and is ready to accept queries after a restart. Similar to /druid/broker/v1/loadstatus, but instead of returning a JSON object, it returns an HTTP status code.

URL

GET /druid/broker/v1/readiness

Responses

200 SUCCESS: Segments successfully loaded
503 SERVICE UNAVAILABLE: Segments have not been loaded

Sample request

cURL

curl "http://BROKER_IP:BROKER_PORT/druid/broker/v1/readiness"

HTTP

GET /druid/broker/v1/readiness HTTP/1.1
Host: http://BROKER_IP:BROKER_PORT

Sample response

A successful response to this endpoint results in an empty response body.
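The two status-code readiness endpoints (Historical and Broker) can be combined into a single post-restart gate: the cluster is query-ready only when both answer 200. A sketch, with the HTTP client abstracted behind an injected probe callable (an assumption of this example) that maps (service, path) to the response status code:

```python
def cluster_query_ready(probe):
    """True when both readiness endpoints answer 200 (503 means not ready).

    `probe` stands in for a real HTTP client; it returns the HTTP status
    code of a GET against the named service.
    """
    endpoints = [
        ("historical", "/druid/historical/v1/readiness"),
        ("broker", "/druid/broker/v1/readiness"),
    ]
    return all(probe(svc, path) == 200 for svc, path in endpoints)
```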

Copyright © 2022 Apache Software Foundation.
Except where otherwise noted, licensed under CC BY-SA 4.0.
Apache Druid, Druid, and the Druid logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.