Queries are made using an HTTP REST-style request to a Broker, Historical, or Realtime node. The query is expressed in JSON, and each of these node types exposes the same REST query interface.
We start by describing an example query with additional comments that mention possible variations. Query operators are also summarized in a table below.
Example Query "rand"
Here is the query in the examples/rand subproject (file is query.body), followed by a commented version of the same.
{
  "queryType": "groupBy",
  "dataSource": "randSeq",
  "granularity": "all",
  "dimensions": [],
  "aggregations": [
    { "type": "count", "name": "rows" },
    { "type": "doubleSum", "fieldName": "events", "name": "e" },
    { "type": "doubleSum", "fieldName": "outColumn", "name": "randomNumberSum" }
  ],
  "postAggregations": [{
    "type": "arithmetic",
    "name": "avg_random",
    "fn": "/",
    "fields": [
      { "type": "fieldAccess", "fieldName": "randomNumberSum" },
      { "type": "fieldAccess", "fieldName": "rows" }
    ]
  }],
  "intervals": ["2012-10-01T00:00/2020-01-01T00"]
}
This query could be submitted via curl like so (assuming the query object is in a file "query.json"):

curl -X POST "http://host:port/druid/v2/?pretty" -H 'content-type: application/json' -d @query.json
The "pretty" query parameter formats the results to be more readable.
Details of Example Query "rand"
The queryType JSON field identifies which kind of query operator is to be used. In this case it is groupBy, the most frequently used kind (which corresponds to the internal implementation class GroupByQuery, registered as "groupBy"). Each query type has its own set of required fields; queryType can also be, for example, "search" or "timeBoundary", and the required fields for each query type are summarized in the Query Operators section below:
{
  "queryType": "groupBy",
The dataSource JSON field shown next identifies where to apply the query. In this case, randSeq corresponds to the schema in the examples/rand/rand_realtime.spec file:
"dataSource": "randSeq",
The granularity JSON field specifies the bucket size for values. It could be a built-in time interval like "second", "minute", "fifteen_minute", "thirty_minute", "hour", or "day". It can also be an expression like {"type": "period", "period": "PT6M"}, meaning "6-minute buckets". See Granularities for more information on the different options for this field. In this example, it is set to the special value "all", which means all data points are bucketed together into the same time bucket:
"granularity": "all",
The dimensions JSON field value is an array of zero or more fields, as defined in the dataSource spec file or defined in the input records and carried forward. These are used to constrain the grouping. If empty, then one value per time granularity bucket is requested in the groupBy:
"dimensions": [],
A groupBy also requires the JSON field "aggregations" (see Aggregations). Each aggregation is applied to the column specified by fieldName, and the output of the aggregation is named according to the value in the "name" field:
"aggregations": [
{ "type":
"count", "name": "rows" },
{ "type":
"doubleSum", "fieldName": "events",
"name": "e" },
{ "type":
"doubleSum", "fieldName": "outColumn", "name":
"randomNumberSum" }
],
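Other aggregator types follow the same shape. For example, a longSum over the same events column (an illustrative sketch, not part of the rand query):

{ "type": "longSum", "fieldName": "events", "name": "eventTotal" }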
You can also specify postAggregations, which are applied after data has been aggregated for the current granularity and dimensions bucket. See Post Aggregations for a detailed description. In the rand example, an arithmetic type operation (division, as specified by "fn") is performed with the result "name" of "avg_random". The "fields" array specifies the inputs to this expression; each "fieldAccess" entry references an aggregation-stage output by its fieldName. A "name" may also be supplied for each "fieldAccess" entry, but it is not used outside this expression and is omitted here:
"postAggregations": [{
"type":
"arithmetic",
"name":
"avg_random",
"fn":
"/",
"fields": [
{ "type":
"fieldAccess", "fieldName": "randomNumberSum"
},
{ "type":
"fieldAccess", "fieldName": "rows" }
]
}],
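Post-aggregation inputs need not all come from aggregators; a constant operand can be mixed in. The fragment below is an illustrative sketch (not part of the rand query) that scales an aggregated sum by a fixed factor:

"postAggregations": [{
  "type": "arithmetic",
  "name": "scaled_sum",
  "fn": "*",
  "fields": [
    { "type": "fieldAccess", "fieldName": "randomNumberSum" },
    { "type": "constant", "name": "hundred", "value": 100 }
  ]
}],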
The "intervals" field specifies the time range(s) of the query as ISO-8601 intervals; data outside the specified intervals will not be used. This example specifies from October 1, 2012 until January 1, 2020:
"intervals":
["2012-10-01T00:00/2020-01-01T00"]
}
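Because intervals is an array, disjoint time ranges can be queried together; the following fragment is illustrative:

"intervals": ["2012-10-01T00:00/2013-01-01T00", "2013-06-01T00:00/2013-09-01T00"]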
Query Operators
The following tables summarize query properties.

Properties shared by all query types:

|property|description|required?|
|--------|-----------|---------|
|dataSource|query is applied to this data source|yes|
|intervals|range of time series to include in query|yes|
|context|This is a key-value map used to alter some of the behavior of a query. See Query Context below.|no|

|query type|property|description|required?|
|----------|--------|-----------|---------|
|timeseries, topN, groupBy, search|filter|Specifies the filter (the "WHERE" clause in SQL) for the query. See Filters.|no|
|timeseries, topN, groupBy, search|granularity|the timestamp granularity to bucket results into (e.g. "hour"). See Granularities for more information.|no|
|timeseries, topN, groupBy|aggregations|aggregations that combine values in a bucket. See Aggregations.|yes|
|timeseries, topN, groupBy|postAggregations|aggregations of aggregations. See Post Aggregations.|yes|
|groupBy|dimensions|constrains the groupings; if empty, then one value per time granularity bucket|yes|
|search|limit|maximum number of results|no|
|search|searchDimensions|Dimensions to apply the search query to. If not specified, it will search through all dimensions.|no|
|search|query|The query portion of the search query. This is essentially a predicate that specifies if something matches.|yes|
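To make the search rows concrete, here is a sketch of a search query; the dimension names are hypothetical and for illustration only:

{
  "queryType": "search",
  "dataSource": "randSeq",
  "granularity": "all",
  "searchDimensions": ["dimA", "dimB"],
  "query": { "type": "insensitive_contains", "value": "foo" },
  "intervals": ["2012-10-01T00:00/2020-01-01T00"],
  "limit": 10
}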
Query Context

|property|default|description|
|--------|-------|-----------|
|timeout|0 (no timeout)|Query timeout in milliseconds, beyond which unfinished queries will be cancelled|
|priority|0|Query Priority. Queries with higher priority get precedence for computational resources.|
|queryId|auto-generated|Unique identifier given to this query. If a query ID is set or known, this can be used to cancel the query|
|useCache|true|Flag indicating whether to leverage the query cache for this query|
|populateCache|true|Flag indicating whether to save the results of the query to the query cache|
|bySegment|false|Return "by segment" results, i.e. results associated with the data segment they came from. Primarily used for debugging.|
|finalize|true|Flag indicating whether to "finalize" aggregation results|
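For illustration, a context object combining several of these properties might look like the following fragment (the values are examples only):

"context": {
  "timeout": 60000,
  "priority": 1,
  "queryId": "abc123",
  "useCache": false
}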
Query Cancellation
Queries can be cancelled explicitly using their unique identifier. If the query identifier is set at the time of query, or is otherwise known, the following endpoint can be used on the Broker or Router to cancel the query.
DELETE /druid/v2/{queryId}
For example, if the query ID is abc123, the query can be cancelled as follows:
curl -X DELETE "http://host:port/druid/v2/abc123"