1 Why You Need InfluxDB
What time series databases store
Time series data is a series of data points, each associated with a specific time. Examples include:
- Server performance metrics
- Financial averages over time
- Sensor data, such as temperature, barometric pressure, wind speeds, etc.

How time series databases differ from relational databases
Relational databases can be used to store and analyze time series data, but depending on the precision of your data, a query can involve potentially millions of rows. InfluxDB is purpose-built to store and query data by time, providing out-of-the-box functionality that optionally downsamples data after a specific age and a query engine optimized for time-based data.
2 Basic Concepts
2.1 database & duration
database: a logical container for users, retention policies, continuous queries, and time series data.
duration: the attribute of the retention policy that determines how long InfluxDB stores data. Data older than the duration are automatically dropped from the database.
2.2 field
The key-value pair in an InfluxDB data structure that records metadata and the actual data value. Fields are required in InfluxDB data structures and they are not indexed: queries on field values scan all points that match the specified time range and, as a result, are not performant relative to tags. Field keys are strings and they store metadata. Field values are the actual data; they can be strings, floats, integers, or booleans. A field value is always associated with a timestamp.
2.3 tags
Tags are optional. A tag is a key-value pair in the InfluxDB data structure that records metadata. You don't need to have tags in your data structure, but it's generally a good idea to make use of them because, unlike fields, tags are indexed. This means that queries on tags are faster and that tags are ideal for storing commonly-queried metadata.
How tags differ from fields
Tags are indexed and fields are not, so queries on tags are more performant than queries on fields.
When to use tags vs. fields (see the example after this list):
(1) Store commonly-queried metadata in tags.
(2) Store data in tags if you plan to use them with the InfluxQL GROUP BY clause.
(3) Store data in fields if you plan to use them with an InfluxQL function.
(4) Store numeric values as fields (tag values only support string values).
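As a concrete illustration (the measurement and key names here are hypothetical, not from the original post), a line protocol point following these guidelines might look like:
weather,location=us-midwest,season=summer temperature=82.1,humidity=41 1465839830100400200
Here location and season are tags (string values, indexed, usable with GROUP BY), while temperature and humidity are fields (numeric values, usable with functions such as MEAN()).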
2.4 measurement
The measurement acts as a container for tags, fields, and the time column, and the measurement name is the description of the data that are stored in the associated fields. Measurement names are strings, and, for any SQL users out there, a measurement is conceptually similar to a table.
2.5 point
In InfluxDB, a point represents a single data record, similar to a row in a SQL database table. Each point:
- has a measurement, a tag set, a field key, a field value, and a timestamp;
- is uniquely identified by its series and timestamp. You cannot store more than one point with the same timestamp in a series. If you write a point to a series with a timestamp that matches an existing point, the field set becomes a union of the old and new field set, and any ties go to the new field set.
2.6 series
In InfluxDB, a series is a collection of points that share a measurement, tag set, and field key. A point represents a single data record that has four components: a measurement, tag set, field set, and a timestamp. A point is uniquely identified by its series and timestamp.
series key
A series key identifies a particular series by measurement, tag set, and field key.
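A minimal sketch of the field-set union behaviour described above (the measurement, tag, and timestamp are hypothetical), using the influx CLI's INSERT statement:
INSERT cpu,host=server01 usage_user=10 1465839830100400200
INSERT cpu,host=server01 usage_system=2 1465839830100400200
Both writes target the same series (cpu,host=server01) and the same timestamp, so the stored result is a single point whose field set is the union: usage_user=10 and usage_system=2.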
3 Queries
3.1 Fuzzy matching with regular expressions
1. Return values that start with a given string:
select fieldName from measurementName where fieldName=~/^givenString/
2. Return values that end with a given string:
select fieldName from measurementName where fieldName=~/givenString$/
3. Return values that contain a given string:
select fieldName from measurementName where fieldName=~/givenString/
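For example (hypothetical measurement and field names, not from the original post), to return log messages whose string field value starts with "error":
select "message" from "app_logs" where "message" =~ /^error/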
3.2 SELECT caveats
The SELECT clause must include a field key.
A query requires at least one field key in the SELECT clause to return data. If the SELECT clause only includes a single tag key or several tag keys, the query returns an empty response. This behavior is a result of how the system stores data.
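A quick illustration, assuming a hypothetical measurement h2o_feet with tag key location and field key water_level:
-- returns an empty response: only a tag key is selected
SELECT "location" FROM "h2o_feet"
-- returns data: the SELECT clause includes a field key
SELECT "water_level","location" FROM "h2o_feet"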
3.3 WHERE clause
Use single quotes; otherwise no data is returned or an error occurs (see the example below).
(1) Single quote string field values in the WHERE clause. Queries with unquoted string field values or double quoted string field values will not return any data and, in most cases, will not return an error.
(2) Single quote tag values in the WHERE clause. Queries with unquoted tag values or double quoted tag values will not return any data and, in most cases, will not return an error.
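For example (hypothetical measurement and tag), only the first query returns data:
-- correct: the tag value is single quoted
SELECT * FROM "h2o_feet" WHERE "location" = 'santa_monica'
-- returns no data: the tag value is double quoted
SELECT * FROM "h2o_feet" WHERE "location" = "santa_monica"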
3.4 GROUP BY
(1) Note that the GROUP BY clause must come after the WHERE clause.
(2) The GROUP BY clause groups query results by one or more specified tags and/or by a specified time interval.
(3) You cannot use GROUP BY to group fields.
(4) fill() changes the value reported for time intervals that have no data. By default, a GROUP BY time() interval with no data reports null as its value in the output column. Note that fill() must go at the end of the GROUP BY clause if you're GROUP(ing) BY several things (for example, both tags and a time interval), as shown in the sketch after this list.
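A sketch combining a tag, a time interval, and fill() (the measurement, field, and time range are hypothetical):
SELECT MEAN("water_level") FROM "h2o_feet" WHERE time >= '2019-08-18T00:00:00Z' AND time <= '2019-08-18T00:30:00Z' GROUP BY "location", time(12m) fill(0)
Intervals with no data report 0 instead of null because of fill(0), and fill() appears at the very end of the GROUP BY clause.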
3.5 ORDER BY time DESC
By default, InfluxDB returns results in ascending time order; the first point returned has the oldest timestamp and the last point returned has the most recent timestamp. ORDER BY time DESC reverses that order such that InfluxDB returns the points with the most recent timestamps first.
Note: ORDER BY time DESC must appear after the GROUP BY clause if the query includes a GROUP BY clause. ORDER BY time DESC must appear after the WHERE clause if the query includes a WHERE clause and no GROUP BY clause.
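For example (hypothetical measurement and tag), the following returns the three most recent points first:
SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' ORDER BY time DESC LIMIT 3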
4 SHOW CARDINALITY
SHOW CARDINALITY is a group of commands that estimate or exactly count the cardinality of measurements, series, tag keys, tag values, and field keys. The SHOW CARDINALITY commands come in two variants: estimated and exact. Estimated values are calculated using sketches and are a safe default for all cardinality sizes. Exact values are counted directly from TSM (Time-Structured Merge Tree) data and can be expensive to run for high-cardinality data.
The following uses tag keys and tag values as examples.
4.1 SHOW TAG KEY CARDINALITY
Estimates or exactly counts the cardinality of the tag key set.
The ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional. When any of these clauses are used, the query falls back to an exact count. Filtering by time is only supported when the Time Series Index (TSI) is enabled; time is not supported in the WHERE clause.
Examples:
-- show estimated tag key cardinality
SHOW TAG KEY CARDINALITY
-- show exact tag key cardinality
SHOW TAG KEY EXACT CARDINALITY
4.2 SHOW TAG VALUES CARDINALITY
Estimates or exactly counts the cardinality of tag values for the specified tag key.
The ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses are optional. When any of these clauses are used, the query falls back to an exact count. Filtering by time is only supported when the Time Series Index (TSI) is enabled.
Examples:
-- show estimated tag key values cardinality for a specified tag key
SHOW TAG VALUES CARDINALITY WITH KEY = "myTagKey"
-- show exact tag key values cardinality for a specified tag key
SHOW TAG VALUES EXACT CARDINALITY WITH KEY = "myTagKey"
4.3 Example use case
In an earlier post we used Telegraf to ship server monitoring data into InfluxDB, and the CPU metrics are always collected (configured in telegraf.conf). If we later need to count how many servers have Telegraf deployed, SHOW TAG VALUES EXACT CARDINALITY gives the answer.
The statement is:
SHOW TAG VALUES EXACT CARDINALITY FROM "cpu" WITH KEY = "host"
That is, count how many distinct host values exist in the cpu measurement. Because telegraf.conf gives each server a unique host value, the number of host values equals the number of servers with Telegraf deployed.
5 DROP vs DELETE
5.1 DROP SERIES
The DROP SERIES query deletes all points from a series in a database, and it drops the series from the index.
The query takes the following form, where you must specify either the FROM clause or the WHERE clause.
The syntax is as follows:
DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_value>'
A successful DROP SERIES query returns an empty result.
Drop all points in the series that have a specific tag pair from all measurements in the database (that is, if no FROM clause is specified, matching series are dropped from every measurement).
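A hedged example (hypothetical measurement and tag): the following drops every series in cpu whose host tag is server01 and removes those series from the index:
DROP SERIES FROM "cpu" WHERE "host" = 'server01'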
The difference from the DELETE query:
The DELETE query deletes all points from a series in a database. Unlike DROP SERIES, DELETE does not drop the series from the index.
5.2 DELETE and DROP MEASUREMENT
DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval>]
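A hedged example (hypothetical measurement, tag, and cutoff time) of a DELETE restricted by tag and time:
DELETE FROM "cpu" WHERE "host" = 'server01' AND time < '2021-01-01T00:00:00Z'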
Deletes only allow filtering by tag and by time. Dropping a measurement is relatively resource-intensive and takes a relatively long time; based on what other users have shared, it is recommended to delete all of the measurement's series before dropping the measurement.
That is, first run:
DROP SERIES FROM "measurement_name"
and then run:
DROP MEASUREMENT "measurement_name"
6 Common Functions
Commonly used functions are summarized below:
| **Category** | **Function** | **Description** | **Notes** |
|-----------|-------------------------------------|-------------------------------------------------------------------------|------------------------------------------------------------------------------------------|
| **Aggregations** | COUNT() | Returns the number of non-null field values. | |
| **Aggregations** | DISTINCT() | Returns the list of unique field values. | `DISTINCT()` often returns several results with the same timestamp; InfluxDB assumes points with the same series and timestamp are duplicate points and simply overwrites any duplicate point with the most recent point in the destination measurement. |
| **Aggregations** | INTEGRAL() | Returns the area under the curve for subsequent field values. | InfluxDB calculates the area under the curve for subsequent field values and converts those results into the summed area per `unit`. The `unit` argument is an integer followed by a duration literal and it is optional. If the query does not specify the `unit`, the unit defaults to one second (`1s`). |
| **Aggregations** | MEAN() | Returns the arithmetic mean (average) of field values. | |
| **Aggregations** | MEDIAN() | Returns the middle value from a sorted list of field values. | `MEDIAN()` is nearly equivalent to `PERCENTILE(field_key, 50)`, except `MEDIAN()` returns the average of the two middle field values if the field contains an even number of values. |
| **Aggregations** | MODE() | Returns the most frequent value in a list of field values. | `MODE()` returns the field value with the earliest timestamp if there's a tie between two or more values for the maximum number of occurrences. |
| **Aggregations** | SPREAD() | Returns the difference between the minimum and maximum field values. | |
| **Aggregations** | STDDEV() | Returns the standard deviation of field values. | |
| **Aggregations** | SUM() | Returns the sum of field values. | |
| **Selectors** | BOTTOM() | Returns the smallest `N` field values. | `BOTTOM()` returns the field value with the earliest timestamp if there's a tie between two or more values for the smallest value. |
| **Selectors** | FIRST() | Returns the field value with the oldest timestamp. | |
| **Selectors** | LAST() | Returns the field value with the most recent timestamp. | |
| **Selectors** | MAX() | Returns the greatest field value. | |
| **Selectors** | MIN() | Returns the lowest field value. | |
| **Selectors** | PERCENTILE() | Returns the `N`th percentile field value. | |
| **Selectors** | SAMPLE() | Returns a random sample of `N` field values. | `SAMPLE()` uses reservoir sampling to generate the random points. |
| **Selectors** | TOP() | Returns the greatest `N` field values. | `TOP()` returns the field value with the earliest timestamp if there's a tie between two or more values for the greatest value. |
| **Transformations** | ABS() | Returns the absolute value of the field value. | |
| **Transformations** | ACOS() | Returns the arccosine (in radians) of the field value. | Field values must be between -1 and 1. |
| **Transformations** | ASIN() | Returns the arcsine (in radians) of the field value. | Field values must be between -1 and 1. |
| **Transformations** | ATAN() | Returns the arctangent (in radians) of the field value. | Field values must be between -1 and 1. |
| **Transformations** | ATAN2() | Returns the arctangent of `y/x` in radians. | |
| **Transformations** | CEIL() | Returns the subsequent value rounded up to the nearest integer. | |
| **Transformations** | COS() | Returns the cosine of the field value. | |
| **Transformations** | CUMULATIVE_SUM() | Returns the running total of subsequent field values. | |
| **Transformations** | DERIVATIVE() | Returns the rate of change between subsequent field values. | InfluxDB calculates the difference between subsequent field values and converts those results into the rate of change per `unit`. The `unit` argument is an integer followed by a duration literal and it is optional. If the query does not specify the `unit` the unit defaults to one second (`1s`). |
| **Transformations** | DIFFERENCE() | Returns the result of subtraction between subsequent field values. | |
| **Transformations** | ELAPSED() | Returns the difference between subsequent field values' timestamps. | InfluxDB calculates the difference between subsequent timestamps. The `unit` option is an integer followed by a duration literal and it determines the unit of the returned difference. If the query does not specify the `unit` option the query returns the difference between timestamps in nanoseconds. |
| **Transformations** | EXP() | Returns the exponential of the field value. | |
| **Transformations** | FLOOR() | Returns the subsequent value rounded down to the nearest integer. | |
| **Transformations** | LN() | Returns the natural logarithm of the field value. | |
| **Transformations** | LOG() | Returns the logarithm of the field value with base `b`. | |
| **Transformations** | LOG2() | Returns the logarithm of the field value to the base 2. | |
| **Transformations** | LOG10() | Returns the logarithm of the field value to the base 10. | |
| **Transformations** | MOVING_AVERAGE() | Returns the rolling average across a window of subsequent field values. | |
| **Transformations** | POW() | Returns the field value to the power of `x`. | |
| **Transformations** | ROUND() | Returns the subsequent value rounded to the nearest integer. | |
| **Transformations** | SIN() | Returns the sine of the field value. | |
| **Transformations** | SQRT() | Returns the square root of the field value. | |
| **Transformations** | TAN() | Returns the tangent of the field value. | |
| **Predictors** | HOLT_WINTERS() | Returns `N` predicted field values. | Predict when data values will cross a given threshold; compare predicted values with actual values to detect anomalies in your data. |
| **Technical analysis** | CHANDE_MOMENTUM_OSCILLATOR() | | The Chande Momentum Oscillator (CMO) is a technical momentum indicator developed by Tushar Chande. The CMO indicator is created by calculating the difference between the sum of all recent higher data points and the sum of all recent lower data points, then dividing the result by the sum of all data movement over a given time period. The result is multiplied by 100 to give the -100 to +100 range. |
| **Technical analysis** | EXPONENTIAL_MOVING_AVERAGE() | | An exponential moving average (EMA) is a type of moving average that is similar to a simple moving average, except that more weight is given to the latest data. It's also known as the "exponentially weighted moving average." This type of moving average reacts faster to recent data changes than a simple moving average. |
| **Technical analysis** | DOUBLE_EXPONENTIAL_MOVING_AVERAGE() | | The Double Exponential Moving Average (DEMA) attempts to remove the inherent lag associated with moving averages by placing more weight on recent values. The name suggests this is achieved by applying a double exponential smoothing, which is not the case. The name double comes from the fact that the value of an EMA is doubled. To keep it in line with the actual data and to remove the lag, the value "EMA of EMA" is subtracted from the previously doubled EMA. |
| **Technical analysis** | KAUFMANS_EFFICIENCY_RATIO() | | Kaufman's Efficiency Ratio, or simply "Efficiency Ratio" (ER), is calculated by dividing the data change over a period by the absolute sum of the data movements that occurred to achieve that change. The resulting ratio ranges between 0 and 1, with higher values representing a more efficient or trending market. The ER is very similar to the Chande Momentum Oscillator (CMO). The difference is that the CMO takes market direction into account, but if you take the absolute CMO and divide by 100, you get the Efficiency Ratio. |
| **Technical analysis** | KAUFMANS_ADAPTIVE_MOVING_AVERAGE() | | Kaufman's Adaptive Moving Average (KAMA) is a moving average designed to account for sample noise or volatility. KAMA will closely follow data points when the data swings are relatively small and noise is low. KAMA will adjust when the data swings widen and follow data from a greater distance. This trend-following indicator can be used to identify the overall trend, time turning points and filter data movements. |
| **Technical analysis** | TRIPLE_EXPONENTIAL_MOVING_AVERAGE() | | The triple exponential moving average (TEMA) was developed to filter out volatility from conventional moving averages. While the name implies that it's a triple exponential smoothing, it's actually a composite of a single exponential moving average, a double exponential moving average, and a triple exponential moving average. |
| **Technical analysis** | TRIPLE_EXPONENTIAL_DERIVATIVE() | | The triple exponential derivative indicator, commonly referred to as "TRIX," is an oscillator used to identify oversold and overbought markets, and it can also be used as a momentum indicator. TRIX calculates a triple exponential moving average of the log of the data input over the period of time. The previous value is subtracted from the current value. This prevents cycles that are shorter than the defined period from being considered by the indicator. Like many oscillators, TRIX oscillates around a zero line. When used as an oscillator, a positive value indicates an overbought market while a negative value indicates an oversold market. When used as a momentum indicator, a positive value suggests momentum is increasing while a negative value suggests momentum is decreasing. Many analysts believe that when the TRIX crosses above the zero line it gives a buy signal, and when it closes below the zero line, it gives a sell signal. |
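A short, hedged usage sketch (the measurement, field, and tag are hypothetical; the function syntax follows the InfluxQL documentation):
SELECT DERIVATIVE("water_level", 6m) FROM "h2o_feet" WHERE "location" = 'santa_monica'
This returns the rate of change of water_level per six-minute unit for the selected series.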
References:
https://blog.csdn.net/xuxiannian/article/details/103559246
https://blog.csdn.net/funnyPython/article/details/89888972
https://docs.influxdata.com/influxdb/v1.8/query_language/explore-data/
https://docs.influxdata.com/influxdb/v1.8/query_language/manage-database/drop-series-from-the-index-with-drop-series
https://docs.influxdata.com/influxdb/v1.8/query_language/functions/
https://help.aliyun.com/document_detail/113127.html?spm=5176.21213303.J_6704733920.12.345d3eda8r81jQ&scm=20140722.S_help%40%40%E6%96%87%E6%A1%A3%40%40113127.S_0%2Bos.ID_113127-RL_show%20tag%20values-OR_helpmain-V_2-P0_1
Original author: 东山絮柳仔
Original post: https://www.cnblogs.com/xuliuzai/p/14711334.html