Monitoring Docker Applications with Exporters

When running applications with Docker, how to monitor application metrics is a common question. Usually this can be solved with an exporter plus Prometheus. The following shows how to configure monitoring for Nginx and MySQL.

nginx + exporter: collecting monitoring data

  1. Write the Dockerfile:
# Build stage, named "exporter"
FROM nginx/nginx-prometheus-exporter:latest as exporter
# Runtime stage
FROM nginx:alpine
COPY ./nginx/status.conf /etc/nginx/conf.d/status.conf
COPY --from=exporter /usr/bin/exporter /usr/bin/exporter
ADD run.sh /run.sh
RUN chmod +x /run.sh
EXPOSE 80 9113
CMD ["/bin/sh", "/run.sh"]
  2. Write run.sh:
#!/bin/sh
nginx -c /etc/nginx/nginx.conf
nginx -s reload
# Run the exporter in the background so the script can reach the tail below
/usr/bin/exporter -nginx.scrape-uri http://127.0.0.1/stub_status &
tail -f /dev/null  # keep this shell running forever so the container never exits

Build the image with the following docker command:

docker build . -t nginx-exporter:v2
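Once the image is built, a quick smoke test is to run it and hit the exporter's metrics endpoint. The container name and port mappings below are illustrative:

```shell
# Run the container, publishing nginx (80) and the exporter (9113)
docker run -d --name nginx-exporter -p 80:80 -p 9113:9113 nginx-exporter:v2

# The exporter should now serve nginx_* series on its /metrics endpoint
curl -s http://127.0.0.1:9113/metrics | grep '^nginx_'
```

If the `nginx_up` series reports 1, the exporter is successfully scraping the stub_status endpoint.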

nginx + mysql + exporters: Prometheus monitoring in one image

The files involved:

[root@ecs-82f5]~/make#  ll
total 24
-rw-r--r-- 1 root root  157 Jun 18 06:25 create_mysql_user.sql
-rw-r--r-- 1 root root  448 Jun 19 04:21 Dockerfile
-rw-r--r-- 1 root root  148 Jun 18 01:32 nginx_status.conf
-rw-r--r-- 1 root root 6954 Jun 18 05:53 prometheus-mysqld-exporter
-rw-r--r-- 1 root root  340 Jun 19 04:21 start.sh

Contents of each file:

[root@ecs-82f5]~/make# more Dockerfile
FROM ubuntu:latest

RUN apt-get update \
    && apt-get -y install nginx prometheus-nginx-exporter mysql-server prometheus-mysqld-exporter

COPY nginx_status.conf /etc/nginx/sites-enabled/nginx_status.conf
COPY prometheus-mysqld-exporter /etc/default/prometheus-mysqld-exporter
COPY create_mysql_user.sql /tmp/create_mysql_user.sql
COPY start.sh /opt/start.sh

EXPOSE 80 9113 9104
ENTRYPOINT ["/bin/bash","/opt/start.sh"]
#ENTRYPOINT ["/bin/bash"]

[root@ecs-82f5]~/make# cat nginx_status.conf
server {
      listen 8080;
      server_name  localhost;
      location /stub_status {
         stub_status on;
         access_log off;
      }
}

[root@ecs-82f5]~/make# cat create_mysql_user.sql
CREATE USER prometheus@localhost IDENTIFIED BY 'StrongPassword';
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO prometheus@localhost;
FLUSH PRIVILEGES;


[root@ecs-82f5]~/make# cat start.sh
#!/bin/bash
# -------------------------------
set -e

service nginx start
/etc/init.d/prometheus-nginx-exporter start
#/usr/bin/prometheus-nginx-exporter &

#mkdir -p /nonexistent
service mysql start

# The next line must run in the background; otherwise start fails after a stop or restart
mysql < /tmp/create_mysql_user.sql &
export DATA_SOURCE_NAME="prometheus:StrongPassword@unix(/run/mysqld/mysqld.sock)/"  # password must match create_mysql_user.sql
/usr/bin/prometheus-mysqld-exporter


[root@ecs-82f5]~/make# cat prometheus-mysqld-exporter
ARGS=""
### Database authentication
#
# By default the DATABASE connection string will be read from
# the file specified with the -config.my-cnf parameter.  For example:
# ARGS='--config.my-cnf /etc/mysql/debian.cnf'
#
# Note that SSL options can only be set using a cnf file.

# To set a connection string from the environment instead, set the
# DATA_SOURCE_NAME variable.

# To use UNIX domain sockets authentication with or without password:
# DATA_SOURCE_NAME="prometheus:nopassword@unix(/run/mysqld/mysqld.sock)/"
DATA_SOURCE_NAME="prometheus:StrongPassword@unix(/run/mysqld/mysqld.sock)/"

# To use a TCP connection and password authentication:
# DATA_SOURCE_NAME="prometheus:password@(hostname:port)/dbname"

### Monitoring user creation.
#
# You need a user with enough privileges for the exporter to run.
#
# Example to create a user to connect (only) via UNIX socket:
#   CREATE USER IF NOT EXISTS 'prometheus'@'localhost' IDENTIFIED WITH auth_socket;
#
# To create a user with a password, that can log in via UNIX or TCP sockets:
#   CREATE USER IF NOT EXISTS 'prometheus'@'localhost' IDENTIFIED BY 'password';
#
# Finally, to grant the necessary privileges:
#   GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'prometheus'@'localhost';

### Available command-line arguments to pass to the exporter.
#
# General options:

#  --config.my-cnf="${HOME}/.my.cnf"
#      Path to .my.cnf file to read MySQL credentials from.
#  --exporter.lock_wait_timeout=2
#      Set a lock_wait_timeout on the connection to avoid long metadata
#      locking.
#  --exporter.log_slow_filter
#      Add a log_slow_filter to avoid slow query logging of scrapes.
#      NOTE: Not supported by Oracle MySQL.
#  --log.level="info"
#      Only log messages with the given severity or above.
#      Valid levels: [debug, info, warn, error, fatal].
#  --log.format="logger:stderr"
#      Set the log target and format.
#      Example: "logger:syslog?appname=bob&local=7" or
#      "logger:stdout?json=true"
#  --timeout-offset=0.25
#      Offset to subtract from timeout in seconds.
#  --web.listen-address=":9104"
#      Address to listen on for web interface and telemetry.
#  --web.telemetry-path="/metrics"
#      Path under which to expose metrics.

# Collector options:

#  --collect.auto_increment.columns
#      Collect auto_increment columns and max values from information_schema.
#  --collect.binlog_size
#      Collect the current size of all registered binlog files.

#  --collect.engine_innodb_status
#      Collect from SHOW ENGINE INNODB STATUS.

#  --collect.engine_tokudb_status
#      Collect from SHOW ENGINE TOKUDB STATUS.

#  --collect.global_status
#      Collect from SHOW GLOBAL STATUS.

#  --collect.global_variables
#      Collect from SHOW GLOBAL VARIABLES.

#  --collect.heartbeat
#      Collect from heartbeat.
#  --collect.heartbeat.database="heartbeat"
#      Database from where to collect heartbeat data.
#  --collect.heartbeat.table="heartbeat"
#      Table from where to collect heartbeat data.

#  --collect.info_schema.clientstats
#      If running with userstat=1, set to true to collect client statistics.

#  --collect.info_schema.innodb_cmp
#      Collect metrics from information_schema.innodb_cmp.

#  --collect.info_schema.innodb_cmpmem
#      Collect metrics from information_schema.innodb_cmpmem.

#  --collect.info_schema.innodb_metrics
#      Collect metrics from information_schema.innodb_metrics.

#  --collect.info_schema.innodb_tablespaces
#      Collect metrics from information_schema.innodb_sys_tablespaces.

#  --collect.info_schema.processlist
#      Collect current thread state counts from the
#      information_schema.processlist.
#  --collect.info_schema.processlist.min_time=0
#      Minimum time a thread must be in each state to be counted.
#  --collect.info_schema.processlist.processes_by_host
#      Enable collecting the number of processes by host.
#  --collect.info_schema.processlist.processes_by_user
#      Enable collecting the number of processes by user.

#  --collect.info_schema.query_response_time
#      Collect query response time distribution if query_response_time_stats is
#      ON.

#  --collect.info_schema.schemastats
#      If running with userstat=1, set to true to collect schema statistics.

#  --collect.info_schema.tables
#      Collect metrics from information_schema.tables.
#  --collect.info_schema.tables.databases="*"
#      The list of databases to collect table stats for, or '*' for all.

#  --collect.info_schema.tablestats
#      If running with userstat=1, set to true to collect table statistics.

#  --collect.info_schema.userstats
#      If running with userstat=1, set to true to collect user statistics.

#  --collect.mysql.user
#      Collect data from mysql.user
#  --collect.mysql.user.privileges
#      Enable collecting user privileges from mysql.user.

#  --collect.perf_schema.eventsstatements
#      Collect metrics from
#      performance_schema.events_statements_summary_by_digest.
#  --collect.perf_schema.eventsstatements.digest_text_limit=120
#      Maximum length of the normalized statement text.
#  --collect.perf_schema.eventsstatements.limit=250
#      Limit the number of events statements digests by response time.
#  --collect.perf_schema.eventsstatements.timelimit=86400
#      Limit how old the 'last_seen' events statements can be, in seconds.

#  --collect.perf_schema.eventsstatementssum
#      Collect metrics of grand sums from
#      performance_schema.events_statements_summary_by_digest.

#  --collect.perf_schema.eventswaits
#      Collect metrics from
#      performance_schema.events_waits_summary_global_by_event_name.

#  --collect.perf_schema.file_events
#      Collect metrics from performance_schema.file_summary_by_event_name.

#  --collect.perf_schema.file_instances
#      Collect metrics from performance_schema.file_summary_by_instance.
#  --collect.perf_schema.file_instances.filter=".*"
#      RegEx file_name filter for performance_schema.file_summary_by_instance.
#  --collect.perf_schema.file_instances.remove_prefix="/var/lib/mysql/"
#      Remove path prefix in performance_schema.file_summary_by_instance.

#  --collect.perf_schema.indexiowaits
#      Collect metrics from
#      performance_schema.table_io_waits_summary_by_index_usage.

#  --collect.perf_schema.replication_applier_status_by_worker
#      Collect metrics from
#      performance_schema.replication_applier_status_by_worker.

#  --collect.perf_schema.replication_group_member_stats
#      Collect metrics from performance_schema.replication_group_member_stats.

#  --collect.perf_schema.tableiowaits
#      Collect metrics from performance_schema.table_io_waits_summary_by_table.

#  --collect.perf_schema.tablelocks
#      Collect metrics from
#      performance_schema.table_lock_waits_summary_by_table.

#  --collect.slave_hosts
#      Scrape information from 'SHOW SLAVE HOSTS'.

#  --collect.slave_status
#      Collect from SHOW SLAVE STATUS.
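With the files above in place, the image exposes nginx on 80, the nginx exporter on 9113, and the mysqld exporter on 9104. To have Prometheus scrape both exporters, a minimal scrape configuration can be generated as sketched below; `HOST` is a placeholder for the address where the container's ports are published:

```shell
# Write a minimal Prometheus scrape config covering both exporters.
# HOST is a placeholder; replace it with the Docker host's address.
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: 'nginx'
    static_configs:
      - targets: ['HOST:9113']
  - job_name: 'mysql'
    static_configs:
      - targets: ['HOST:9104']
EOF
cat prometheus.yml
```

Point a Prometheus server at this file (e.g. `--config.file=prometheus.yml`) and the `nginx_*` and `mysql_*` metric families should appear after the first scrape interval.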
