Data deletion

By time range, manually

Apache Druid stores data partitioned by time chunk and supports deleting data for time chunks by dropping segments. This is a fast, metadata-only operation.

Deletion by time range happens in two steps:

  1. Segments to be deleted must first be marked as "unused". This can happen when a segment is dropped by a drop rule or when you manually mark a segment unused through the Coordinator API or web console. This is a soft delete: the data is not available for querying, but the segment files remain in deep storage, and the segment records remain in the metadata store.
  2. Once a segment is marked "unused", you can use a kill task to permanently delete the segment file from deep storage and remove its record from the metadata store. This is a hard delete: the data is unrecoverable unless you have a backup.

For documentation on disabling segments using the Coordinator API, see the Legacy metadata API reference.
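
For example, a minimal sketch of the soft-delete step using curl, assuming a Coordinator at localhost:8081, a datasource named wikipedia, and an illustrative interval:

# Mark all segments in the given time range as unused (soft delete).
curl -X POST "http://localhost:8081/druid/coordinator/v1/datasources/wikipedia/markUnused" \
  -H "Content-Type: application/json" \
  -d '{"interval": "2016-06-27/2016-06-28"}'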

A data deletion tutorial is available at Tutorial: Deleting data.

By time range, automatically

Druid supports load and drop rules, which define the intervals of time where data should be preserved and the intervals where it should be discarded. Data that falls under a drop rule is marked unused, just as if you had manually marked that time range unused. This is a fast, metadata-only operation.

Data that is dropped in this way is marked unused, but remains in deep storage. To permanently delete it, use a kill task.
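
As a sketch of how such a rule chain might look, the following retention rules keep the most recent three months of data and drop everything older (the period, tier name, and replicant count here are illustrative assumptions):

[
  {
    "type": "loadByPeriod",
    "period": "P3M",
    "tieredReplicants": {"_default_tier": 2}
  },
  {
    "type": "dropForever"
  }
]

Druid evaluates rules in order, so segments newer than three months match the load rule and all remaining segments fall through to the drop rule.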

Specific records

Druid supports deleting specific records using reindexing with a filter. The filter specifies which data remains after reindexing, so it must be the inverse of the data you want to delete. Because segments must be rewritten to delete data in this way, it can be a time-consuming operation.

For example, to delete records where userName is 'bob' with native batch indexing, use a transformSpec with filter {"type": "not", "field": {"type": "selector", "dimension": "userName", "value": "bob"}}.
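
In context, that filter sits inside the transformSpec of the ingestion spec's dataSchema. A minimal sketch:

"transformSpec": {
  "filter": {
    "type": "not",
    "field": {
      "type": "selector",
      "dimension": "userName",
      "value": "bob"
    }
  }
}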

To delete the same records using SQL, use REPLACE with WHERE userName <> 'bob'.
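
A minimal sketch of the SQL form, assuming a datasource named my_table and day-grained partitioning:

REPLACE INTO "my_table" OVERWRITE ALL
SELECT *
FROM "my_table"
WHERE userName <> 'bob'
PARTITIONED BY DAY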

To reindex using native batch, use the druid input source; if needed, a transformSpec can filter or modify the data during the reindexing job. To reindex with SQL, use REPLACE <table> OVERWRITE with SELECT ... FROM <table>. (Druid does not have UPDATE or ALTER TABLE statements.) Any SQL SELECT query can be used to filter, modify, or enrich the data during the job.
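
For the native batch path, the ioConfig reads the existing data back through the druid input source. A sketch, with the datasource name and interval as placeholders:

"ioConfig": {
  "type": "index_parallel",
  "inputSource": {
    "type": "druid",
    "dataSource": "my_table",
    "interval": "2016-06-27/2016-06-28"
  }
}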

Data that is deleted in this way is marked unused, but remains in deep storage. To permanently delete it, use a kill task.

Entire table

Deleting an entire table works the same way as deleting part of a table by time range. First, mark all segments unused using the Coordinator API or web console. Then, optionally, delete them permanently using a kill task.
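
For example, a sketch of the first step using curl, assuming a Coordinator at localhost:8081 and a datasource named wikipedia:

# Mark every segment of the datasource as unused (soft delete).
curl -X DELETE "http://localhost:8081/druid/coordinator/v1/datasources/wikipedia"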

Permanently (kill task)

Data that has been overwritten or soft-deleted still remains as segments that have been marked unused. You can use a kill task to permanently delete this data.

The available grammar is:

{
    "type": "kill",
    "id": <task_id>,
    "dataSource": <task_datasource>,
    "interval": <all_unused_segments_in_this_interval_will_die!>,
    "context": <task_context>,
    "batchSize": <optional_batch_size>,
    "limit": <maximum_number_of_segments_to_delete>
}

Some of the parameters used in the task payload are explained below:

  • batchSize (default: 100): Maximum number of segments that are deleted in one kill batch. Some operations on the Overlord may get stuck while a kill task is in progress due to concurrency constraints (such as in TaskLockbox), so a kill task splits the list of unused segments into smaller batches to yield Overlord resources intermittently to other task operations.
  • limit (default: null, meaning no limit): Maximum number of segments for the kill task to delete.
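
For example, a concrete payload and its submission, as a sketch (the datasource, interval, and Overlord address localhost:8090 are illustrative assumptions):

{
  "type": "kill",
  "dataSource": "wikipedia",
  "interval": "2016-06-27/2016-06-28"
}

# Submit the payload above (saved as kill-task.json) to the Overlord's task endpoint.
curl -X POST "http://localhost:8090/druid/indexer/v1/task" \
  -H "Content-Type: application/json" \
  -d @kill-task.json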

WARNING: The kill task permanently removes all information about the affected segments from the metadata store and deep storage. This operation cannot be undone.
