Monitor CEPH with ZABBIX
Environment: – Ceph Luminous – zabbix-server-mysql and zabbix-web installed – Ceph MGR enabled
Topology: Ceph monitored by Zabbix
Install Zabbix following the official Zabbix documentation:
https://www.zabbix.com/download
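As a reference, a minimal install sketch, assuming a CentOS 7 server and Zabbix 4.0 (the repository URL and package names below should be adjusted to your distribution and Zabbix version):

# Add the Zabbix repository (assumed: Zabbix 4.0 on CentOS/RHEL 7)
rpm -Uvh https://repo.zabbix.com/zabbix/4.0/rhel/7/x86_64/zabbix-release-4.0-1.el7.noarch.rpm
# Install the server with MySQL backend, the web frontend, and the agent
yum install -y zabbix-server-mysql zabbix-web-mysql zabbix-agent
# Enable and start the services
systemctl enable zabbix-server zabbix-agent httpd
systemctl start zabbix-server zabbix-agent httpd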
Option 1: Use the Ceph MGR zabbix module to push data to Zabbix (did not work for us)
Step 1: Enable and configure the zabbix module on the Ceph MGR
ceph mgr module enable zabbix
# Check that the module is enabled
ceph mgr module ls

ceph zabbix config-set zabbix_host 172.16.0.135
Configuration option zabbix_host updated
ceph zabbix config-set identifier 172.16.0.136
Configuration option identifier updated

# Find the zabbix_sender binary
which zabbix_sender
/usr/bin/zabbix_sender

ceph zabbix config-set zabbix_sender /usr/bin/zabbix_sender
Configuration option zabbix_sender updated
ceph zabbix config-set zabbix_port 10051
Configuration option zabbix_port updated
ceph zabbix config-set interval 60
Configuration option interval updated

ceph zabbix config-show
{"zabbix_host": "172.16.0.135", "identifier": "172.16.0.136", "zabbix_sender": "/usr/bin/zabbix_sender", "interval": 60, "zabbix_port": 10051}

# Locate the Zabbix template bundled with ceph-mgr and copy it to the Zabbix server
rpm -ql ceph-mgr | grep xml
/usr/lib64/ceph/mgr/zabbix/zabbix_template.xml

cat zabbix_template.xml
<?xml version="1.0" encoding="UTF-8"?>
<zabbix_export>
    <version>2.0</version>   <!-- change the version to 2.0 so the template can be imported -->
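The template usually has to be copied to the Zabbix server before importing; if the frontend rejects the export version, it can be lowered as noted above (a hedged sketch, check the actual <version> value in your copy of the file first):

# Copy the bundled template to the Zabbix server
scp /usr/lib64/ceph/mgr/zabbix/zabbix_template.xml root@172.16.0.135:/tmp/
# On the Zabbix server: force the export version to 2.0 if the import is rejected
sed -i 's#<version>.*</version>#<version>2.0</version>#' /tmp/zabbix_template.xml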
Step 2: Import zabbix_template.xml to Zabbix
Import the XML template:
Configuration/Templates/Import/
Create a host using the imported template:
Configuration/Hosts/Create host/
# In the Templates tab, link the template imported above
Allow the sending host for the trapper items by updating the Zabbix MySQL database: each item's trapper_hosts field lists the addresses that are allowed to push values to it.
MySQL [(none)]> use zabbix;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MySQL [zabbix]> select hostid from hosts where name='Name of hostname';
+--------+
| hostid |
+--------+
| 10xxx |
+--------+
1 row in set (0.00 sec)
MySQL [zabbix]> select itemid, name, key_, type, trapper_hosts from items where hostid=10xxx;
+--------+-----------------------------------------------+-----------------------------+------+---------------+
| itemid | name | key_ | type | trapper_hosts |
+--------+-----------------------------------------------+-----------------------------+------+---------------+
| 35793 | Number of Monitors | ceph.num_mon | 2 | |
| 35794 | Number of OSDs | ceph.num_osd | 2 | |
| 35795 | Number of OSDs in state: IN | ceph.num_osd_in | 2 | |
| 35796 | Number of OSDs in state: UP | ceph.num_osd_up | 2 | |
| 35797 | Number of Placement Groups | ceph.num_pg | 2 | |
| 35798 | Number of Placement Groups in Temporary state | ceph.num_pg_temp | 2 | |
| 35799 | Number of Pools | ceph.num_pools | 2 | |
| 35800 | Ceph OSD avg fill | ceph.osd_avg_fill | 2 | |
| 35801 | Ceph backfill full ratio | ceph.osd_backfillfull_ratio | 2 | |
| 35802 | Ceph full ratio | ceph.osd_full_ratio | 2 | |
| 35803 | Ceph OSD Apply latency Avg | ceph.osd_latency_apply_avg | 2 | |
| 35804 | Ceph OSD Apply latency Max | ceph.osd_latency_apply_max | 2 | |
| 35805 | Ceph OSD Apply latency Min | ceph.osd_latency_apply_min | 2 | |
| 35806 | Ceph OSD Commit latency Avg | ceph.osd_latency_commit_avg | 2 | |
| 35807 | Ceph OSD Commit latency Max | ceph.osd_latency_commit_max | 2 | |
| 35808 | Ceph OSD Commit latency Min | ceph.osd_latency_commit_min | 2 | |
| 35809 | Ceph OSD max fill | ceph.osd_max_fill | 2 | |
| 35810 | Ceph OSD min fill | ceph.osd_min_fill | 2 | |
| 35811 | Ceph nearfull ratio | ceph.osd_nearfull_ratio | 2 | |
| 35812 | Overall Ceph status | ceph.overall_status | 2 | |
| 35813 | Overal Ceph status (numeric) | ceph.overall_status_int | 2 | |
| 35814 | Ceph Read bandwidth | ceph.rd_bytes | 2 | |
| 35815 | Ceph Read operations | ceph.rd_ops | 2 | |
| 35816 | Total bytes available | ceph.total_avail_bytes | 2 | |
| 35817 | Total bytes | ceph.total_bytes | 2 | |
| 35818 | Total number of objects | ceph.total_objects | 2 | |
| 35819 | Total bytes used | ceph.total_used_bytes | 2 | |
| 35820 | Ceph Write bandwidth | ceph.wr_bytes | 2 | |
| 35821 | Ceph Write operations | ceph.wr_ops | 2 | |
+--------+-----------------------------------------------+-----------------------------+------+---------------+
29 rows in set (0.00 sec)
MySQL [zabbix]> update items set trapper_hosts='127.0.0.1,172.16.0.135' where hostid=10xxx;
Query OK, 29 rows affected (0.00 sec)
Rows matched: 29 Changed: 29 Warnings: 0
MySQL [zabbix]> select itemid, name, key_, type, trapper_hosts from items where hostid=10xxx;
+--------+-----------------------------------------------+-----------------------------+------+------------------------+
| itemid | name | key_ | type | trapper_hosts |
+--------+-----------------------------------------------+-----------------------------+------+------------------------+
| 35793 | Number of Monitors | ceph.num_mon | 2 | 127.0.0.1,172.16.0.135 |
| 35794 | Number of OSDs | ceph.num_osd | 2 | 127.0.0.1,172.16.0.135 |
| 35795 | Number of OSDs in state: IN | ceph.num_osd_in | 2 | 127.0.0.1,172.16.0.135 |
| 35796 | Number of OSDs in state: UP | ceph.num_osd_up | 2 | 127.0.0.1,172.16.0.135 |
| 35797 | Number of Placement Groups | ceph.num_pg | 2 | 127.0.0.1,172.16.0.135 |
| 35798 | Number of Placement Groups in Temporary state | ceph.num_pg_temp | 2 | 127.0.0.1,172.16.0.135 |
| 35799 | Number of Pools | ceph.num_pools | 2 | 127.0.0.1,172.16.0.135 |
| 35800 | Ceph OSD avg fill | ceph.osd_avg_fill | 2 | 127.0.0.1,172.16.0.135 |
| 35801 | Ceph backfill full ratio | ceph.osd_backfillfull_ratio | 2 | 127.0.0.1,172.16.0.135 |
| 35802 | Ceph full ratio | ceph.osd_full_ratio | 2 | 127.0.0.1,172.16.0.135 |
| 35803 | Ceph OSD Apply latency Avg | ceph.osd_latency_apply_avg | 2 | 127.0.0.1,172.16.0.135 |
| 35804 | Ceph OSD Apply latency Max | ceph.osd_latency_apply_max | 2 | 127.0.0.1,172.16.0.135 |
| 35805 | Ceph OSD Apply latency Min | ceph.osd_latency_apply_min | 2 | 127.0.0.1,172.16.0.135 |
| 35806 | Ceph OSD Commit latency Avg | ceph.osd_latency_commit_avg | 2 | 127.0.0.1,172.16.0.135 |
| 35807 | Ceph OSD Commit latency Max | ceph.osd_latency_commit_max | 2 | 127.0.0.1,172.16.0.135 |
| 35808 | Ceph OSD Commit latency Min | ceph.osd_latency_commit_min | 2 | 127.0.0.1,172.16.0.135 |
| 35809 | Ceph OSD max fill | ceph.osd_max_fill | 2 | 127.0.0.1,172.16.0.135 |
| 35810 | Ceph OSD min fill | ceph.osd_min_fill | 2 | 127.0.0.1,172.16.0.135 |
| 35811 | Ceph nearfull ratio | ceph.osd_nearfull_ratio | 2 | 127.0.0.1,172.16.0.135 |
| 35812 | Overall Ceph status | ceph.overall_status | 2 | 127.0.0.1,172.16.0.135 |
| 35813 | Overal Ceph status (numeric) | ceph.overall_status_int | 2 | 127.0.0.1,172.16.0.135 |
| 35814 | Ceph Read bandwidth | ceph.rd_bytes | 2 | 127.0.0.1,172.16.0.135 |
| 35815 | Ceph Read operations | ceph.rd_ops | 2 | 127.0.0.1,172.16.0.135 |
| 35816 | Total bytes available | ceph.total_avail_bytes | 2 | 127.0.0.1,172.16.0.135 |
| 35817 | Total bytes | ceph.total_bytes | 2 | 127.0.0.1,172.16.0.135 |
| 35818 | Total number of objects | ceph.total_objects | 2 | 127.0.0.1,172.16.0.135 |
| 35819 | Total bytes used | ceph.total_used_bytes | 2 | 127.0.0.1,172.16.0.135 |
| 35820 | Ceph Write bandwidth | ceph.wr_bytes | 2 | 127.0.0.1,172.16.0.135 |
| 35821 | Ceph Write operations | ceph.wr_ops | 2 | 127.0.0.1,172.16.0.135 |
+--------+-----------------------------------------------+-----------------------------+------+------------------------+
29 rows in set (0.00 sec)
Add a cron job that runs ceph zabbix send every minute:
[root@cephsvr-128040 ~]# cat /etc/cron.d/ceph
*/1 * * * * root ceph zabbix send
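To confirm that values actually reach Zabbix, a push can be triggered by hand; the zabbix_sender call below is only a sanity-check sketch (the -s value must match the host name configured in Zabbix, here the identifier 172.16.0.136, and the key/value are examples):

# Push once immediately instead of waiting for cron
ceph zabbix send
# Exercise the trapper path directly with an example key/value
zabbix_sender -z 172.16.0.135 -p 10051 -s 172.16.0.136 -k ceph.num_mon -o 3 -vv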
Step 3: View the graphs on the Zabbix server
[Screenshots: Ceph pool storage, Ceph I/O, Ceph bandwidth, and Ceph OSD latency graphs]
Option 2: Use a script and the Zabbix agent to collect data (works)
Use the community template shared on Zabbix Share – Monitor CEPH with ZABBIX:
https://share.zabbix.com/cat-app/cluster/ceph
Install zabbix-agent on the Ceph node 172.16.0.136 (node1)
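A minimal agent install sketch, assuming CentOS 7 with the Zabbix repository already configured as for the server (the agent is started after editing the configuration below):

yum install -y zabbix-agent
systemctl enable zabbix-agent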
NOTE:
Configure zabbix_agentd.conf:
# IP address of the Zabbix server (passive checks)
Server=172.16.0.135
# IP address of the Zabbix server for active checks
ServerActive=172.16.0.135
# Hostname of this agent (must match the host name configured in Zabbix)
Hostname=node1
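The shared template collects data through a script exposed as a UserParameter; the script path and item key below are placeholders only, follow the README shipped with the downloaded template for the real names. After deploying the script and editing the configuration, restart the agent and verify it answers from the Zabbix server:

# Hypothetical UserParameter - use the key/script shipped with the downloaded template
echo 'UserParameter=ceph.health,/etc/zabbix/scripts/ceph_status.sh health' > /etc/zabbix/zabbix_agentd.d/ceph.conf
# Note: the zabbix user needs read access to a Ceph keyring for the script to run ceph commands
systemctl restart zabbix-agent
# From the Zabbix server: basic reachability check of the agent
zabbix_get -s 172.16.0.136 -k agent.ping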
Import the XML template:
Configuration/Templates/Import/
Create a host using the imported template:
Configuration/Hosts/Create host/
# In the Templates tab, link the template imported above
View the resulting graphs on the Zabbix server.