Monitoring for OpenStack - A practical HOWTO with Sensu
ATTENTION: This playbook is in the process of being fully released on GitHub and may not work as-is.
It is also very RHEL-centric, although it will likely work on CentOS/RDO as well. Please bear with me as I make this more usable. Thank you.
Contacts
Contributors
- Graeme Gillis
- Gaetan Trellu
- Alexandre Maumene
- Cyril Lopez
- Gaël Lambert
- Guillaume Coré
Description
Introduction
As of OSP7/OSP8, RHEL OSP uses the upstream TripleO code to deploy OpenStack by means of a minimal (but critical) OpenStack called the 'undercloud'. [1]
I won't go into the specifics of this kind of deployment, but suffice it to say that even the simplest OSP setup instantly becomes, well... quite 'convoluted'.
At the same time, all of the different subsystems, nodes and endpoints are deployed without alerting, monitoring or graphing, leaving it to
the customer to deploy their own monitoring framework on top of OpenStack.
Some simple alerting and monitoring based on 'Sensu' (http://www.sensuapp.org) is scheduled to find its way into OSP10.
Also, Graeme Gillis from the OpenStack Operations Engineering Team [2] was kind enough to put together some great resources for those wanting to deploy
alerting, monitoring and graphing on OSP7 ([3], [4], [6] and [7]).
This small project [5] aims to build upon these tools and procedures to provide an out-of-the-box alerting framework for an OpenStack cloud.
Please remember that it is a work in progress and subject to change without notice. All comments/improvements/questions are thus welcomed.
Logical Architecture
Sensu was selected for this project because it:
1) is already popular within the OpenStack community.
2) has already been selected as the alerting framework for OSP8 ([6] and [7]).
Here's a diagram describing the logical architecture (Thanks Graeme Gillis):
The logical architecture deployed by the tooling described in this document includes a single Sensu server for the entire Undercloud and Overcloud.
While it might be feasible to deploy an HA-enabled Sensu configuration or a redundant Sensu architecture with several Sensu servers, that is outside the scope of this document.
Technical Architecture
The Sensu server may be a Virtual Machine (KVM, VirtualBox, etc.) or a physical machine. We'll only be describing the KVM-based Virtual Machine setup in this document.
While the most obvious requirements are that the Sensu server runs RHEL7.x and has at least 8GB of RAM, the most important prerequisite is network-related: your Sensu server -must- have access to the heart of your Overcloud.
This means the control plane, the provisioning network AND the OOB network (to monitor the IPMI access points of your overcloud nodes).
Therefore, it makes sense to build your Sensu server and your Undercloud machine alike.
If your undercloud machine is a KVM guest, it makes sense to create your Sensu Server as a KVM guest using the exact same bridges/networks on the same Hypervisor.
This setup is described here: a KVM Hypervisor with two KVM guests: the undercloud and the Sensu server.
If your undercloud machine is another type of VM (VBox, VMware, etc..), you'll have to do some network planning prior to installing your Sensu server and figure out the networks by yourself.
Here's an example of an OSP7 cloud after OSP-D installation (Thanks to Dimitri Savinea for the original Dia):
And here is the same OSP7 cloud with the Sensu Server added as a VM on the same Hypervisor as the Undercloud
(notice the pink box underneath the undercloud in the top-right corner)
Some Screenshots
The uchiwa dashboard is the Operator's interface to the Sensu server.
The Operator is first prompted to log in to uchiwa using the credentials from the playbook (more on this later).
After logging in, the 'Events' dashboard is displayed (note the buttons to the left to navigate the views).
We notice a warning on the 'over_ceilometer_api' check out of 72 checks configured on the Cloud.
Clicking on 'Clients' brings up the list of registered clients and their keepalive status. Here we have 18 clients and 72 different check types.
Clicking on a Client brings up the detailed view of the checks being performed on that client (here with Sensu 0.16.0).
Sensu is a work in progress. Features are added and bugs fixed as new versions are released. Here's the same Client with Sensu 0.20.6:
Installation Howto
Create the Sensu VM on the appropriate Hypervisor
We will be creating a Sensu server on the same hypervisor as the undercloud and we will copy the network configuration from the latter.
This will happen once your entire cloud is deployed as we need the services to be up in order to check them.
For a quick start, you could also 'clone' the undercloud VM and uninstall its OSP packages (with the VM's network down, of course), as sketched below.
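For example, a rough sketch of such a clone on the hypervisor (domain names taken from the walkthrough further down; remember to keep the clone's NICs down until the OSP packages are removed):

[root@kvm1 ~]# virsh shutdown sc-instack
[root@kvm1 ~]# virt-clone --original sc-instack --name sensu01 --auto-clone
[root@kvm1 ~]# virsh start sc-instack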
Setup the skeleton for the Sensu server VM
1. Download a RHEL guest image (See [6]):
$ ls -la images/rhel-guest-image-7.2-20151102.0.x86_64.qcow2
-rw-r-----. 1 vcojot vcojot 474909696 Jan  7 11:55 images/rhel-guest-image-7.2-20151102.0.x86_64.qcow2
2. Customize your RHEL guest image on your RHEL7 or Fedora box
(Adapt the sample script provided with the ansible role to fit your needs)
Replace my SSH public key with yours. Also replace the UNIX password for the admin account with the one generated at the previous step.
The script below produces a RHEL7.2 guest image which includes most of the requirements; it is meant to be run on Fedora.
a) Modify the rhel7.2 image on Fedora (Fedora Only, for RHEL7 please see below).
First, create your credentials file:
$ cat .sm_creds
SM_USER="myCDNuser"
SM_PASSWD="myCDNpassword"
SM_POOL_ID="8a85b2789a071c01407d7bc5ed98"
STACK_SSH_KEY="ssh-rsa AAAAB3NzaC1yc2EA...... user@mymachine"
Next, run the provided/adapted script:
$ ./ansible/ansible/tools/virt_customize_sensu.sh
libguestfs-tools-c-1.32.3-2.fc23.x86_64
libguestfs-xfs-1.32.3-2.fc23.x86_64
./ansible/ansible/tools/virt_customize_sensu.sh: line 21: [: too many arguments
‘images/rhel-guest-image-7.2-20151102.0.x86_64.qcow2’ -> ‘rhel-7.2-guest-sensu.x86_64.qcow2’
Image resized.
‘rhel-7.2-guest-sensu.x86_64.qcow2’ -> ‘_tmpdisk.qcow2’
[   0.0] Examining rhel-7.2-guest-sensu.x86_64.qcow2
**********
Summary of changes:
/dev/sda1: This partition will be resized from 6.0G to 128.0G.  The
filesystem xfs on /dev/sda1 will be expanded using the 'xfs_growfs' method.
**********
[   4.1] Setting up initial partition table on _tmpdisk.qcow2
[   4.2] Copying /dev/sda1
...................
Please note that the above script is only provided as a convenience and should only be used if there isn't a ready-to-use image available.
You'll also need to set up the Red Hat subscription on the Sensu VM.
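A minimal sketch of that subscription step (the pool ID is the one from the credentials file above; repository names other than the optools one mentioned under 'Known Issues' are typical defaults to adapt to your entitlements):

[root@sensu01 ~]# subscription-manager register --username myCDNuser
[root@sensu01 ~]# subscription-manager attach --pool=8a85b2789a071c01407d7bc5ed98
[root@sensu01 ~]# subscription-manager repos --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-openstack-7.0-optools-rpms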
b) If using RHEL7, copy the RHEL7.2 image and run the required ansible playbook.
If using this method, simply download the rhel-7.2 guest image and proceed further once it's ready to be installed.
That VM will need to be subscribed to the proper CDN channels.
c) Check your results.
If all goes well, this should provide you with a ready-to-use QCOW image, which we'll use later.
$ ls -la rhel-7.2-guest-sensu.x86_64.qcow2
-rw-r-----. 1 vcojot vcojot 2009989120 Jan 21 13:00 rhel-7.2-guest-sensu.x86_64.qcow2
Alternatively, you could also deploy any RHEL7 VM and use Gaetan's ansible playbook to perform the above tasks [8]
3. Copy the guest image to your Hypervisor
Upload this file to your KVM host and place it under /var/lib/libvirt/images.
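For example (hypervisor hostname assumed; the file is renamed here to match the disk path used by virt-install below):

$ scp rhel-7.2-guest-sensu.x86_64.qcow2 root@kvm1:/var/lib/libvirt/images/sensu.x86_64.qcow2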
Integrate the sensu VM with your cloud (network and credentials)
We will be copying the network configuration from the instack VM since we are deploying a 'sibling' VM.
WARNING: The actual network configuration of the instack and sensu VM's varies from deployment to deployment.
The walkthrough below will probably give you a rough idea, and you will have to adapt it to your actual network configuration.
1. List the undercloud's network config:
Let's become root on the hypervisor and see what we have:
[root@kvm1 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     sc-instack                     running
sc-instack is the 'undercloud' VM; we want to copy that configuration to the new Sensu VM.
Let's look at the network configuration (the relevant pieces of information for our example are the bridge names and their order):
[root@kvm1 ~]# virsh domiflist 2
Interface  Type     Source   Model    MAC
-------------------------------------------------------
vnet0 bridge br3115 virtio 52:54:00:27:b6:f4
vnet1 bridge br2320 virtio 52:54:00:5b:b5:fb
vnet2 bridge brpxe virtio 52:54:00:85:7a:01
So we have 'br3115', 'br2320' and 'brpxe', in that order. These will become 'eth0', 'eth1' and 'eth2'.
You'll also have to create/pick/compute 3 new MAC addresses, as we'll be adding 3 network interfaces.
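One possible way to generate them is to combine the libvirt/KVM OUI ('52:54:00') with three random octets, for instance:

$ for i in 1 2 3; do printf '52:54:00:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)); done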
Let's use all that with our newly created QCOW image.
2. Install and boot your Sensu VM
[root@kvm1 ~]# virt-install --boot hd --connect qemu:///system --name sensu01 --ram 16384 --vcpus=8 --cpu kvm64 --virt-type kvm --hvm --accelerate --noautoconsole \
  --network=bridge:br3115,model=virtio \
  --network=bridge:br2320,model=virtio \
  --network=bridge:brpxe,model=virtio \
  --serial pty --os-type linux \
  --disk path=/var/lib/libvirt/images/sensu.x86_64.qcow2,format=qcow2,bus=virtio,cache=none
[root@kvm1 ~]# virsh console sensu01
Connected to domain sensu01
Escape character is ^]

Employee SKU
Kernel 3.10.0-327.el7.x86_64 on an x86_64

sensu01 login:
3. Reserve some IPs for the Sensu server on both your undercloud and overcloud
Log in to your undercloud and source the proper rc files
(one for the undercloud, one for the overcloud).
Identify the 'internal_api' and the 'ctlplane' networks; they are two of the bridges we identified earlier ('br2320' and 'brpxe', respectively).
These will get mapped to your Sensu server on 'eth1' and 'eth2', eth0 being the outside network interface.
[stack@sc-instack ~]$ . stackrc
[stack@sc-instack ~]$ neutron net-list
+--------------------------------------+--------------+--------------------------------------------------------+
| id                                   | name         | subnets                                                |
+--------------------------------------+--------------+--------------------------------------------------------+
| 175c21a7-9858-412a-bb7a-6763bf6d84ee | storage_mgmt | 967dcecb-73e4-476f-ba21-eba91d551823 10.1.33.0/24      |
| 44bb7c18-2ba6-49ab-b344-7d644bb3110f | internal_api | fc3ec57c-ff10-40b6-9b63-d6293bfe6ee1 10.1.20.0/24      |
| 75cbd5c2-aee3-47be-a4eb-b355d1edb281 | storage      | cb738311-89f0-4543-a850-b1258c1a6d6c 10.1.32.0/24      |
| 207a3108-e341-4360-b433-bfd6007cc59d | ctlplane     | 7e1e052f-b4eb-4f3b-8b1c-ba298cbe530f 10.20.30.0/24     |
| bf45910d-36e9-43f7-9802-1545d7182608 | tenant       | 9ad085d8-e185-4d73-8721-8a2ef0ce5e87 10.1.31.0/24      |
| c7f74ecb-ff08-49da-9d8b-f3070fbcbcee | external     | 8d915163-43ec-431c-81ab-841750682475 192.168.0.32/27   |
+--------------------------------------+--------------+--------------------------------------------------------+
Now it's time to get some IPs on these two subnets (hint: use 'neutron port-list | grep <subnet_id>').
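For instance, to see which ports/IPs already exist on the 'internal_api' subnet listed above:

[stack@sc-instack ~]$ neutron port-list | grep fc3ec57c-ff10-40b6-9b63-d6293bfe6ee1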
Reserve an unused IP on the 'internal_api' network (I picked IP <subnet>.42 because it was available )
[stack@sc-instack ~]$ neutron port-create --fixed-ip ip_address=10.1.30.42 44bb7c18-2ba6-49ab-b344-7d644bb3110f    (internal_api)
Created a new port:
+-----------------------+-------------------------------------------------------------------------------------+
| Field                 | Value                                                                               |
+-----------------------+-------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                |
| allowed_address_pairs |                                                                                     |
| binding:host_id       |                                                                                     |
| binding:profile       | {}                                                                                  |
| binding:vif_details   | {}                                                                                  |
| binding:vif_type      | unbound                                                                             |
| binding:vnic_type     | normal                                                                              |
| device_id             |                                                                                     |
| device_owner          |                                                                                     |
| fixed_ips             | {"subnet_id": "fc3ec57c-ff10-40b6-9b63-d6293bfe6ee1", "ip_address": "10.154.20.42"} |
| id                    | b6e0bdd9-aac4-4689-8f31-a0c0bf2c1324                                                |
| mac_address           | 52:54:00:65:7e:b9                                                                   |
| name                  |                                                                                     |
| network_id            | 44bb7c18-2ba6-49ab-b344-7d644bb3110f                                                |
| security_groups       | 92c4d34a-2b9c-4a85-b309-d3425214eca1                                                |
| status                | DOWN                                                                                |
| tenant_id             | fae58cc4e36440b3aa9c9844e54f968d                                                    |
+-----------------------+-------------------------------------------------------------------------------------+
Do the same with the 'ctlplane' network (IP <subnet>.42 was free there too..)
[stack@sc-instack ~]$ neutron port-create --fixed-ip ip_address=10.20.30.42 207a3108-e341-4360-b433-bfd6007cc59d    (ctlplane)
Created a new port:
+-----------------------+-------------------------------------------------------------------------------------+
| Field                 | Value                                                                               |
+-----------------------+-------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                |
| allowed_address_pairs |                                                                                     |
| binding:host_id       |                                                                                     |
| binding:profile       | {}                                                                                  |
| binding:vif_details   | {}                                                                                  |
| binding:vif_type      | unbound                                                                             |
| binding:vnic_type     | normal                                                                              |
| device_id             |                                                                                     |
| device_owner          |                                                                                     |
| fixed_ips             | {"subnet_id": "7e1e052f-b4eb-4f3b-8b1c-ba298cbe530f", "ip_address": "10.153.20.42"} |
| id                    | 64ece3ea-5df4-4840-93ae-fcd844e8cc29                                                |
| mac_address           | 52:54:00:91:45:b3                                                                   |
| name                  |                                                                                     |
| network_id            | 207a3108-e341-4360-b433-bfd6007cc59d                                                |
| security_groups       | 92c4d34a-2b9c-4a85-b309-d3425214eca1                                                |
| status                | DOWN                                                                                |
| tenant_id             | fae58cc4e36440b3aa9c9844e54f968d                                                    |
+-----------------------+-------------------------------------------------------------------------------------+
4. Configure the reserved IPs on your Sensu server.
Of course, now that the IPs are reserved we could simply enable DHCP on 'eth1' and 'eth2', but that would make the Sensu VM rely on the Cloud's DHCP infrastructure,
so we will use static IPv4 addresses instead.
As usual, adapt this to your network.
[admin@sensu01 ~]$ sudo su -
[root@sensu01 admin]# nmcli con mod "System eth0" connection.id eth0
[root@sensu01 admin]# nmcli con mod "System eth1" connection.id eth1
[root@sensu01 admin]# nmcli con mod "System eth2" connection.id eth2
[root@sensu01 admin]# nmcli con mod eth1 ipv4.addresses 10.1.30.42/24
[root@sensu01 admin]# nmcli con mod eth1 ipv4.gateway 10.1.30.1
[root@sensu01 admin]# nmcli con mod eth1 ipv4.method manual
[root@sensu01 admin]# nmcli con up eth1
[root@sensu01 admin]# nmcli con mod eth2 ipv4.addresses 10.20.30.42/24
[root@sensu01 admin]# nmcli con mod eth2 ipv4.method manual
[root@sensu01 admin]# nmcli con up eth2

## or if you don't want to use NetworkManager:
cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
IPADDR=....
NETMASK=255.255.255.0
GATEWAY=....

cat /etc/sysconfig/network-scripts/ifcfg-eth1.20
DEVICE=eth1.20
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=....
NETMASK=255.255.255.0

# then
ifup eth1
ifup eth1.20
5. Create a monitoring user on both the undercloud and overcloud
In order to perform checks against the OpenStack APIs of the undercloud and of the overcloud, we'll need a tenant and a user in each of those two Keystone databases.
Note that I am using 'monitoring' as the tenant and 'sensu/sensu' as the user/password. Change these as you see fit, but remember the values as we'll need them during the Ansible part.
[stack@sc-instack ~]$ . stackrc
[stack@sc-instack ~]$ keystone tenant-create --name monitoring --enabled true --description 'Tenant used by the OSP monitoring framework'
+-------------+---------------------------------------------+
| Property    | Value                                       |
+-------------+---------------------------------------------+
| description | Tenant used by the OSP monitoring framework |
| enabled     | True                                        |
| id          | cc95c4d9a9654c469b2b352895109c5d            |
| name        | monitoring                                  |
+-------------+---------------------------------------------+
[stack@sc-instack ~]$ keystone user-create --name sensu --tenant monitoring --pass sensu --email vcojot@redhat.com --enabled true
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    | vcojot@redhat.com                |
| enabled  | True                             |
| id       | 4cd0578ee84740538283de84940cd737 |
| name     | sensu                            |
| tenantId | cc95c4d9a9654c469b2b352895109c5d |
| username | sensu                            |
+----------+----------------------------------+
[stack@sc-instack ~]$ . overcloudrc
[stack@sc-instack ~]$ keystone tenant-create --name monitoring --enabled true --description 'Tenant used by the OSP monitoring framework'
+-------------+---------------------------------------------+
| Property    | Value                                       |
+-------------+---------------------------------------------+
| description | Tenant used by the OSP monitoring framework |
| enabled     | True                                        |
| id          | 499b5edd1c724d37b4c6573ed15d9a85            |
| name        | monitoring                                  |
+-------------+---------------------------------------------+
[stack@sc-instack ~]$ keystone user-create --name sensu --tenant monitoring --pass sensu --email vcojot@redhat.com --enabled true
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    | vcojot@redhat.com                |
| enabled  | True                             |
| id       | 6f8c07c1c8e045698eb31e2187e9fc59 |
| name     | sensu                            |
| tenantId | 499b5edd1c724d37b4c6573ed15d9a85 |
| username | sensu                            |
+----------+----------------------------------+
6. Run ansible
You can run Ansible from the undercloud (or from the Sensu VM); just make sure the SSH key of the machine from which you run Ansible is installed everywhere.
In our case, we run it from the undercloud with the 'heat-admin' user because that user is already present on most machines.
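One quick way to spot-check key-based access to an overcloud node before running the playbook (node IP illustrative, taken from the inventory generated later):

[stack@sc-instack ~]$ ssh -o BatchMode=yes heat-admin@10.20.30.34 hostname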
Obtain the playbook and adapt to your environment
1. Pull down the Git repository on the Sensu VM (or copy it from elsewhere)
[stack@sensu01 ~]$ mkdir mycloud
[stack@sensu01 ~]$ cd mycloud
[stack@sensu01 mycloud]$ git clone https://github.com/ElCoyote27/ansible-sensu-for-openstack
Cloning into 'ansible-sensu-for-openstack'...
remote: Counting objects: 473, done.
remote: Compressing objects: 100% (451/451), done.
remote: Total 473 (delta 242), reused 0 (delta 0)
Receiving objects: 100% (473/473), 75.73 MiB | 1.70 MiB/s, done.
Resolving deltas: 100% (242/242), done.
Checking connectivity... done.
2. Install Ansible version 2 or newer
[root@sensu01 ~]# easy_install pip
[root@sensu01 ~]# pip install ansible
3. Create the inventory file and the playbook.
(Look inside the README.md within the ansible role and copy/paste.) Adapt the IPs and credentials to your environment, of course.
If you followed the previous steps you can now use a small tool to generate your inventory.
This tool works by contacting the undercloud machine, so it really requires a working network configuration.
It will build an inventory file with all of your hosts, including the IPMI IP addresses (as configured in Nova).
Redirect the script's output to an inventory file.
[admin@sensu01 mycloud]$ ./ansible/ansible/tools/update_inventory.sh stack@10.20.30.1
# Collecting information from Nova............Done!
[cmpt]
sc-cmpt00 ansible_ssh_host=10.20.30.39 ansible_user=heat-admin ipmi_lan_addr=10.111.28.58
sc-cmpt01 ansible_ssh_host=10.20.30.40 ansible_user=heat-admin ipmi_lan_addr=10.111.28.67
sc-cmpt02 ansible_ssh_host=10.20.30.33 ansible_user=heat-admin ipmi_lan_addr=10.111.28.42
sc-cmpt03 ansible_ssh_host=10.20.30.36 ansible_user=heat-admin ipmi_lan_addr=10.111.28.39
sc-cmpt04 ansible_ssh_host=10.20.30.37 ansible_user=heat-admin ipmi_lan_addr=10.111.28.60
[ctrl]
sc-ctrl00 ansible_ssh_host=10.20.30.34 ansible_user=heat-admin ipmi_lan_addr=10.111.28.66
sc-ctrl01 ansible_ssh_host=10.20.30.35 ansible_user=heat-admin ipmi_lan_addr=10.111.28.41
sc-ctrl02 ansible_ssh_host=10.20.30.38 ansible_user=heat-admin ipmi_lan_addr=10.111.28.53
[ceph]
sc-ceph00 ansible_ssh_host=10.20.30.30 ansible_user=heat-admin ipmi_lan_addr=10.111.28.65
sc-ceph01 ansible_ssh_host=10.20.30.31 ansible_user=heat-admin ipmi_lan_addr=10.111.28.61
sc-ceph02 ansible_ssh_host=10.20.30.24 ansible_user=heat-admin ipmi_lan_addr=10.111.28.64
sc-ceph03 ansible_ssh_host=10.20.30.26 ansible_user=heat-admin ipmi_lan_addr=10.111.28.63
sc-ceph04 ansible_ssh_host=10.20.30.28 ansible_user=heat-admin ipmi_lan_addr=10.111.28.62
[strg]
sc-strg00 ansible_ssh_host=10.20.30.27 ansible_user=heat-admin ipmi_lan_addr=10.111.28.43
sc-strg01 ansible_ssh_host=10.20.30.25 ansible_user=heat-admin ipmi_lan_addr=10.111.28.40
sc-strg02 ansible_ssh_host=10.20.30.29 ansible_user=heat-admin ipmi_lan_addr=10.111.28.44
[server]
sensu01 ansible_ssh_host=10.1.30.42 ansible_user=admin
[instack]
instack ansible_ssh_host=10.20.30.1 ansible_user=stack

[admin@sensu01 mycloud]$ ./ansible-sensu-for-openstack/tools/update_inventory.sh stack@10.20.30.1 > hosts
Next, create files in group_vars/ to customize the IPs, logins and passwords to match those found in your infrastructure.
You'll need the API URLs for your undercloud and overcloud. You can start from the .sample files:
[stack@sensu01 ansible-sensu-for-openstack]$ cat group_vars/all.sample > group_vars/all
[stack@sensu01 ansible-sensu-for-openstack]$ cat group_vars/sensu_server.sample > group_vars/sensu_server
for example:
[stack@sensu01 ansible-sensu-for-openstack]$ cat group_vars/all
# Put this in your playbook at group_vars/all
#sensu_use_local_repo: false
#sensu_use_upstream_version: false
#sensu_api_ssl: false
sensu_server_rabbitmq_hostname: "192.0.2.6"

[stack@sensu01 ansible-sensu-for-openstack]$ cat group_vars/sensu_server
#sensu_server_dashboard_user: uchiwa
#sensu_server_dashboard_password: mypassword
sensu_smtp_from: "sensu@company.com"
sensu_smtp_to: "sensu@company.com"
sensu_smtp_relay: "smtp.company.com"
#sensu_handlers:
#  email:
#    type: pipe
#    command: "mail -S smtp={{ sensu_smtp_relay }} -s 'Sensu alert' -r {{ sensu_smtp_from }} {{ sensu_smtp_to }}"
#over_os_username: sensu
#over_os_password: sensu
#over_os_tenant_name: monitoring
over_os_auth_url: http://10.0.0.4:5000/v2.0
#under_os_username: sensu
#under_os_password: sensu
#under_os_tenant_name: monitoring
under_os_auth_url: http://192.0.2.1:5000/v2.0
4. Execute the role with ansible to deploy sensu and uchiwa
When your config is ready, you will want to execute the playbook and check for errors (if any).
A good way to test whether your host list is correct and your SSH keys are in place is to run the following ansible CLI before launching the playbook itself:
admin@sensu01$ ansible -m ping -i hosts all
When ready, launch the playbook with the following CLI:
admin@sensu01$ ansible-playbook -i hosts playbook/sensu.yml
If all goes well, you should receive output similar to that included below.
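Should a run fail, re-running against a subset of hosts with extra verbosity usually helps pinpoint the issue; for example (the 'server' group comes from the generated inventory):

admin@sensu01$ ansible-playbook -i hosts playbook/sensu.yml --limit server -vvv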
Most of the IPs and config settings can be overridden in the playbook, in group_vars, or by editing the <playbook_dir>/defaults/main.yml file.
Should your servers be unable to reach the Internet and/or contact the CDN, it is possible to set 'sensu_use_local_repo: true' to install the local set of RPMs provided with the Git repo.
This should only be done if you have valid RHEL and OSP subscriptions but cannot download software from the Internet on your OSP nodes.
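For instance, a disconnected setup might carry something like this in group_vars/all (a sketch based on the sample shown earlier):

[stack@sensu01 ansible-sensu-for-openstack]$ cat group_vars/all
sensu_use_local_repo: true
sensu_server_rabbitmq_hostname: "192.0.2.6"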
5. Sample outputs
Fig. 1 (Starting the playbook)
Fig. 2 (Playbook finished)
Verify proper deployment
Once the playbook has run successfully, you will be able to log into your uchiwa interface to check the current status of your OSP.
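Besides uchiwa, you can also query the Sensu API directly on the Sensu server to confirm that all clients have registered; assuming the default sensu-api port (4567), something like:

[admin@sensu01 ~]$ curl -s http://localhost:4567/clients | python -m json.tool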
Known Issues
- Bug 1304860: sensu pkg in rhel-7-server-openstack-7.0-optools-rpms is too old (0.16.0) please update to 0.2x
Raw list of Sensu checks included with this playbook (To be completed).
The following is a work in progress listing the checks that are currently implemented:
Check Name | Implementation | Subscribers | Purpose |
ceph_health: | /usr/bin/sudo .../checks/oschecks-check_ceph_health | ceph | |
ceph_disk_free: | /usr/bin/sudo .../checks/oschecks-check_ceph_df | ceph | |
nova-compute: | sudo .../checks/oschecks-check_amqp nova-compute | cmpt | |
proc_nova_compute: | .../plugins/check_proc.sh nova-compute 1 100 | cmpt | Looks for 1 (up to 100) nova-compute process(es) |
proc_ceilometer-agent-compute: | .../plugins/check_proc.sh ceilometer-agent-compute 1 1 | cmpt | Looks for 1 ceilometer-agent-compute process |
rabbitmq_status: | /usr/bin/sudo /usr/sbin/rabbitmqctl status | ctrl | Rabbitmqctl Status |
rabbitmq_cluster_status: | /usr/bin/sudo /usr/sbin/rabbitmqctl cluster_status | ctrl | Rabbitmqctl Cluster Status |
pacemaker_status: | /usr/bin/sudo /usr/sbin/crm_mon -s | ctrl | Pacemaker Cluster Status |
proc_keystone_all: | .../plugins/check_proc.sh keystone-all 3 100 | ctrl, instack | Looks for 3 (up to 100) keystone-all process(es) |
proc_httpd: | .../plugins/check_proc.sh httpd 3 100 | ctrl, instack | |
proc_mongod: | .../plugins/check_proc.sh mongod 1 1 | ctrl | |
proc_nova_api: | .../plugins/check_proc.sh nova-api 1 100 | ctrl, instack | |
proc_glance_api: | .../plugins/check_proc.sh glance-api 1 100 | ctrl, instack | |
proc_glance_registry: | .../plugins/check_proc.sh glance-registry 1 100 | ctrl, instack | |
proc_nova_conductor: | .../plugins/check_proc.sh nova-conductor 1 100 | ctrl, instack | |
proc_nova_consoleauth: | .../plugins/check_proc.sh nova-consoleauth 1 1 | ctrl, instack | |
proc_nova_novncproxy: | .../plugins/check_proc.sh nova-novncproxy 1 1 | ctrl | |
proc_neutron-server: | .../plugins/check_proc.sh neutron-server 1 100 | ctrl | |
proc_neutron-l3-agent: | .../plugins/check_proc.sh neutron-l3-agent 1 1 | ctrl | |
proc_neutron-dhcp-agent: | .../plugins/check_proc.sh neutron-dhcp-agent 1 1 | ctrl | |
proc_neutron-openvswitch-agent: | .../plugins/check_proc.sh neutron-openvswitch-agent 1 1 | ctrl | |
proc_neutron-metadata-agent: | .../plugins/check_proc.sh neutron-metadata-agent 1 100 | ctrl | |
proc_neutron-ns-metadata-proxy: | .../plugins/check_proc.sh neutron-ns-metadata-proxy 1 100 | ctrl | |
ceilometer-collector: | sudo .../checks/oschecks-check_amqp ceilometer-collector | ctrl, instack | |
ceilometer-agent-notification : | sudo .../checks/oschecks-check_amqp ceilometer-agent-notification | ctrl, instack | |
ceilometer-alarm-notifier : | sudo .../checks/oschecks-check_amqp ceilometer-alarm-notifier | ctrl, instack | |
cinder-scheduler : | sudo .../checks/oschecks-check_amqp cinder-scheduler | ctrl | |
nova-consoleauth : | sudo .../checks/oschecks-check_amqp nova-consoleauth | ctrl, instack | |
nova-conductor : | sudo .../checks/oschecks-check_amqp nova-conductor | ctrl, instack | |
nova-scheduler : | sudo .../checks/oschecks-check_amqp nova-scheduler | ctrl, instack | |
neutron-server : | sudo .../checks/oschecks-check_amqp neutron-server | ctrl, instack | |
neutron-l3-agent : | sudo .../checks/oschecks-check_amqp neutron-l3-agent | ctrl | |
neutron-lbaas-agent : | sudo .../checks/oschecks-check_amqp neutron-lbaas-agent | lbaas | |
neutron-dhcp-agent : | sudo .../checks/oschecks-check_amqp neutron-dhcp-agent | ctrl, instack | |
heat-engine : | sudo .../checks/oschecks-check_amqp heat-engine | ctrl, instack | |
heat_service_list: | /usr/bin/sudo /usr/bin/heat-manage service list | ctrl, instack | |
proc_chronyd: | .../plugins/check_proc.sh chronyd 1 1 | instack | |
over_ceilometer_api: | .../checks/oschecks-check_ceilometer_api <OS_ARGS> | openstack_over_api | |
over_cinder_volume: | .../checks/oschecks-check_cinder_volume <OS_ARGS> | openstack_over_api | |
over_glance_api: | .../checks/oschecks-check_glance_api <OS_ARGS> | openstack_over_api | |
over_glance_image_exists: | .../checks/oschecks-check_glance_image_exists <OS_ARGS> | openstack_over_api | |
over_glance_upload: | .../checks/oschecks-check_glance_upload <OS_ARGS> | openstack_over_api | |
over_keystone_api: | .../checks/oschecks-check_keystone_api <OS_ARGS> | openstack_over_api | |
over_neutron_api: | .../checks/oschecks-check_neutron_api <OS_ARGS> | openstack_over_api | |
over_neutron_floating_ip: | .../checks/oschecks-check_neutron_floating_ip <OS_ARGS> | openstack_over_api | |
over_nova_api: | .../checks/oschecks-check_nova_api <OS_ARGS> | openstack_over_api | |
over_nova_instance: | .../checks/oschecks-check_nova_instance <OS_ARGS> | openstack_over_api | |
instack_glance_api: | .../checks/oschecks-check_glance_api <OS_ARGS> | openstack_under_api | |
instack_glance_image_exists: | .../checks/oschecks-check_glance_image_exists <OS_ARGS> | openstack_under_api | |
instack_glance_upload: | .../checks/oschecks-check_glance_upload <OS_ARGS> | openstack_under_api | |
instack_keystone_api: | .../checks/oschecks-check_keystone_api <OS_ARGS> | openstack_under_api | |
instack_nova_api: | .../checks/oschecks-check_nova_api <OS_ARGS> | openstack_under_api | |
proc_ntpd: | .../plugins/check_proc.sh ntpd 1 1 | osp_generic | |
proc_xinetd: | .../plugins/check_proc.sh xinetd 1 1 | osp_generic | |
proc_ntpd: | .../plugins/check_proc.sh ntpd 1 1 | overcld_generic | |
proc_xinetd: | .../plugins/check_proc.sh xinetd 1 1 | overcld_generic | |
proc_redis-server: | .../plugins/check_proc.sh redis-server 1 1 | server,ctrl | |
proc_rabbitmq: | .../plugins/check_proc.sh beam.smp 1 1 | server | |
sensu_api: | .../plugins/check_proc.sh sensu-api 1 1 | server | |
sensu_server: | .../plugins/check_proc.sh sensu-server 1 1 | server | |
proc_swift-object-server: | .../plugins/check_proc.sh swift-object-server 1 2 | strg | |
proc_swift-account-server: | .../plugins/check_proc.sh swift-account-server 1 2 | strg | |
proc_swift-container-server: | .../plugins/check_proc.sh swift-container-server 1 2 | strg | |
proc_swift-object-replicator: | .../plugins/check_proc.sh swift-object-replicator 1 2 | strg | |
proc_swift-account-replicator: | .../plugins/check_proc.sh swift-account-replicator 1 2 | strg | |
proc_swift-container-replicator: | .../plugins/check_proc.sh swift-container-replicator 1 2 | strg | |
proc_swift-object-auditor: | .../plugins/check_proc.sh swift-object-auditor 1 3 | strg | |
proc_swift-account-auditor: | .../plugins/check_proc.sh swift-account-auditor 1 2 | strg | |
proc_swift-container-auditor: | .../plugins/check_proc.sh swift-container-auditor 1 2 | strg | |
LSI_PERC_status: | sudo .../plugins/megaclisas-status --nagios | system | |
linux_bonding: | .../plugins/check_linux_bonding | system | |
sensu_client: | .../plugins/check_proc.sh sensu-client 1 5 | system | |
system_file_descriptors: | .../plugins/check_open_fds | system | |
system_CPU: | .../plugins/check_cpu.sh | system | |
system_memory: | .../plugins/check_mem.sh | system | Checks systems memory usage |
system_FS_root: | .../plugins/check_disk.sh -c 90 -w 80 -d / | system | Checks root FS available space |
system_FS_root_inodes: | .../plugins/check_disk_inodes -w 80 -c 90 -p / | system | Checks root FS available inodes |
proc_crond: | .../plugins/check_proc.sh crond 1 1 | system | Looks for 1 crond process |
proc_systemd: | .../plugins/check_proc.sh systemd 1 100 | system | Looks for 1 systemd process |
proc_sshd: | .../plugins/check_proc.sh sshd 1 100 | system | Looks for 1 (up to 100) sshd process(es) |
The WIP document listing all of the checks is found on Google Docs [9]
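For reference, each row above corresponds to a Sensu check definition deployed on the server. A minimal sketch of what such a Sensu Core JSON snippet looks like (the file path under conf.d and the interval are illustrative, not taken from the playbook):

[root@sensu01 ~]# cat /etc/sensu/conf.d/checks/proc_crond.json   # file name and interval illustrative
{
  "checks": {
    "proc_crond": {
      "command": ".../plugins/check_proc.sh crond 1 1",
      "subscribers": [ "system" ],
      "interval": 60
    }
  }
}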
Related links
[2] N/A
[3] N/A
[4] N/A
[6] N/A
[7] N/A
[8] N/A
[9] N/A