---
keywords: [built-in pipelines, greptime_identity, JSON logs, log processing, time index, pipeline, GreptimeDB]
description: Learn about GreptimeDB's built-in pipelines, including the greptime_identity pipeline for processing JSON logs with automatic schema creation, type conversion, and time index configuration.
---

# Built-in Pipelines

GreptimeDB offers built-in pipelines for common log formats, allowing you to use them directly without creating new pipelines.

Note that the built-in pipelines are not editable.
Additionally, the `greptime_` prefix in pipeline names is reserved.
| 12 | + |
| 13 | +## `greptime_identity` |
| 14 | + |
| 15 | +The `greptime_identity` pipeline is designed for writing JSON logs and automatically creates columns for each field in the JSON log. |
| 16 | + |
| 17 | +- The first-level keys in the JSON log are used as column names. |
| 18 | +- An error is returned if the same field has different types. |
| 19 | +- Fields with `null` values are ignored. |
| 20 | +- If time index is not specified, an additional column, `greptime_timestamp`, is added to the table as the time index to indicate when the log was written. |
| 21 | + |
| 22 | +### Type conversion rules |
| 23 | + |
| 24 | +- `string` -> `string` |
| 25 | +- `number` -> `int64` or `float64` |
| 26 | +- `boolean` -> `bool` |
| 27 | +- `null` -> ignore |
| 28 | +- `array` -> `json` |
| 29 | +- `object` -> `json` |
| 30 | + |
| 31 | + |

For example, given the following JSON data:

```json
[
  {"name": "Alice", "age": 20, "is_student": true, "score": 90.5, "object": {"a": 1, "b": 2}},
  {"age": 21, "is_student": false, "score": 85.5, "company": "A", "whatever": null},
  {"name": "Charlie", "age": 22, "is_student": true, "score": 95.5, "array": [1, 2, 3]}
]
```

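This batch can be written through the HTTP ingest endpoint, as in the following sketch (the `public` database, `pipeline_logs` table, local port, and authentication placeholder are assumptions that mirror the examples later in this document):

```shell
curl -X "POST" "http://localhost:4000/v1/ingest?db=public&table=pipeline_logs&pipeline_name=greptime_identity" \
  -H "Content-Type: application/json" \
  -H "Authorization: Basic {{authentication}}" \
  -d '[
    {"name": "Alice", "age": 20, "is_student": true, "score": 90.5, "object": {"a": 1, "b": 2}},
    {"age": 21, "is_student": false, "score": 85.5, "company": "A", "whatever": null},
    {"name": "Charlie", "age": 22, "is_student": true, "score": 95.5, "array": [1, 2, 3]}
  ]'
```
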
GreptimeDB merges the schemas of all rows in the batch to produce the final schema. The table schema will be:

```sql
mysql> desc pipeline_logs;
+--------------------+---------------------+------+------+---------+---------------+
| Column             | Type                | Key  | Null | Default | Semantic Type |
+--------------------+---------------------+------+------+---------+---------------+
| age                | Int64               |      | YES  |         | FIELD         |
| is_student         | Boolean             |      | YES  |         | FIELD         |
| name               | String              |      | YES  |         | FIELD         |
| object             | Json                |      | YES  |         | FIELD         |
| score              | Float64             |      | YES  |         | FIELD         |
| company            | String              |      | YES  |         | FIELD         |
| array              | Json                |      | YES  |         | FIELD         |
| greptime_timestamp | TimestampNanosecond | PRI  | NO   |         | TIMESTAMP     |
+--------------------+---------------------+------+------+---------+---------------+
8 rows in set (0.00 sec)
```

The data will be stored in the table as follows:

```sql
mysql> select * from pipeline_logs;
+------+------------+---------+---------------+-------+---------+---------+----------------------------+
| age  | is_student | name    | object        | score | company | array   | greptime_timestamp         |
+------+------------+---------+---------------+-------+---------+---------+----------------------------+
|   22 |          1 | Charlie | NULL          |  95.5 | NULL    | [1,2,3] | 2024-10-18 09:35:48.333020 |
|   21 |          0 | NULL    | NULL          |  85.5 | A       | NULL    | 2024-10-18 09:35:48.333020 |
|   20 |          1 | Alice   | {"a":1,"b":2} |  90.5 | NULL    | NULL    | 2024-10-18 09:35:48.333020 |
+------+------------+---------+---------------+-------+---------+---------+----------------------------+
3 rows in set (0.01 sec)
```

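Since `object` and `array` are stored as `json` columns, nested values can be extracted at query time. The following is a sketch that assumes the `json_get_int` JSON extraction function available in recent GreptimeDB versions:

```sql
-- Extract the nested key "a" from the json column "object"
SELECT name, json_get_int("object", 'a') AS object_a
FROM pipeline_logs
WHERE "object" IS NOT NULL;
```
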
### Specify time index

A time index is necessary in GreptimeDB. Since the `greptime_identity` pipeline does not require a YAML configuration, you must set the time index through a query parameter if you want to use a timestamp from the log data instead of the automatically generated timestamp recorded when the data arrives.

Example of incoming log data:
```JSON
[
    {"action": "login", "ts": 1742814853}
]
```

To instruct the server to use `ts` as the time index, append the `custom_time_index` query parameter to the request URL:
```shell
curl -X "POST" "http://localhost:4000/v1/ingest?db=public&table=pipeline_logs&pipeline_name=greptime_identity&custom_time_index=ts;epoch;s" \
  -H "Content-Type: application/json" \
  -H "Authorization: Basic {{authentication}}" \
  -d $'[{"action": "login", "ts": 1742814853}]'
```

The `custom_time_index` parameter accepts two formats, depending on the input data format:
- Epoch number format: `<field_name>;epoch;<resolution>`
  - The field can be an integer or a string.
  - The resolution must be one of: `s`, `ms`, `us`, or `ns`.
- Date string format: `<field_name>;datestr;<format>`
  - For example, if the input data contains a timestamp like `2025-03-24 19:31:37+08:00`, the corresponding format should be `%Y-%m-%d %H:%M:%S%:z`.

With the configuration above, the resulting table will correctly use the specified log data field as the time index.
```sql
DESC pipeline_logs;
```
```sql
+--------+-----------------+------+------+---------+---------------+
| Column | Type            | Key  | Null | Default | Semantic Type |
+--------+-----------------+------+------+---------+---------------+
| ts     | TimestampSecond | PRI  | NO   |         | TIMESTAMP     |
| action | String          |      | YES  |         | FIELD         |
+--------+-----------------+------+------+---------+---------------+
2 rows in set (0.02 sec)
```

Here are some examples of using `custom_time_index`, assuming the time field is named `input_ts`:
- 1742814853: `custom_time_index=input_ts;epoch;s`
- 1752749137000: `custom_time_index=input_ts;epoch;ms`
- "2025-07-17T10:00:00+0800": `custom_time_index=input_ts;datestr;%Y-%m-%dT%H:%M:%S%z`
- "2025-06-27T15:02:23.082253908Z": `custom_time_index=input_ts;datestr;%Y-%m-%dT%H:%M:%S%.9f%#z`

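For instance, the millisecond epoch case above could be sent as follows (a sketch reusing the placeholder table and credentials from the earlier examples):

```shell
curl -X "POST" "http://localhost:4000/v1/ingest?db=public&table=pipeline_logs&pipeline_name=greptime_identity&custom_time_index=input_ts;epoch;ms" \
  -H "Content-Type: application/json" \
  -H "Authorization: Basic {{authentication}}" \
  -d $'[{"action": "login", "input_ts": 1752749137000}]'
```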

### Flatten JSON objects

To flatten a JSON object into a single-level structure, add the `x-greptime-pipeline-params` header to the request and set `flatten_json_object` to `true`.

Here is a sample request:

```shell
curl -X "POST" "http://localhost:4000/v1/ingest?db=<db-name>&table=<table-name>&pipeline_name=greptime_identity&version=<pipeline-version>" \
  -H "Content-Type: application/x-ndjson" \
  -H "Authorization: Basic {{authentication}}" \
  -H "x-greptime-pipeline-params: flatten_json_object=true" \
  -d "$<log-items>"
```

With this configuration, GreptimeDB will automatically flatten each field of the nested JSON object into a separate column. For example, the following object:

```JSON
{
  "a": {
    "b": {
      "c": [1, 2, 3]
    }
  },
  "d": [
    "foo",
    "bar"
  ],
  "e": {
    "f": [7, 8, 9],
    "g": {
      "h": 123,
      "i": "hello",
      "j": {
        "k": true
      }
    }
  }
}
```

will be flattened to:

```json
{
  "a.b.c": [1,2,3],
  "d": ["foo","bar"],
  "e.f": [7,8,9],
  "e.g.h": 123,
  "e.g.i": "hello",
  "e.g.j.k": true
}
```
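
Putting it together, a request like the following sketch would produce one column per flattened key (the `public` database, `flatten_logs` table name, and authentication placeholder are illustrative assumptions):

```shell
curl -X "POST" "http://localhost:4000/v1/ingest?db=public&table=flatten_logs&pipeline_name=greptime_identity" \
  -H "Content-Type: application/json" \
  -H "Authorization: Basic {{authentication}}" \
  -H "x-greptime-pipeline-params: flatten_json_object=true" \
  -d $'[{"a": {"b": {"c": [1, 2, 3]}}, "d": ["foo", "bar"]}]'
```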