@unbelauscht
Last active May 11, 2020 19:54
Parser that prints the visual monitoring status from the OVH status page
#!/usr/bin/env python3
# Example output:
# ###############
#
# GRA1: LV1 -> G131B11:
# 1 server down 15min 0sec ago
# ERI1: LV1 -> E101D14:
# 1 server down 15min 0sec ago
# WAW1: LV1 -> W16A12:
# 1 server down 8min 44sec ago
# BHS5: LV1 -> T05A03:
# 1 server down 15min 0sec ago
# BHS7: LV1 -> B713B01:
# 1 server down 15min 0sec ago
from bs4 import BeautifulSoup
import requests
import re
# datacenter identifiers used in the status page URLs
datacenters = ['gra1', 'gra2', 'rbx', 'rbx2', 'rbx3', 'rbx4', 'rbx5', 'rbx6', 'rbx7', 'sbg1', 'sbg2', 'sbg3', 'sbg4', 'eri1', 'waw1', 'lim1', 'bhs1', 'bhs2', 'bhs3', 'bhs4', 'bhs5', 'bhs6', 'bhs7', 'syd1', 'sgp1']
# rack-level CSS classes used on the status page
levels = ['lv1', 'lv2', 'lv3', 'lv4', 'lv5', 'lv6']
# rack labels such as 'G131B11'; the leading letter is optional (RBX-1 racks have none)
rackPattern = '[A-Z]?[0-9]+[A-Z][0-9]+'

for dc in datacenters:
    # fetch the visual monitoring page for this datacenter
    r = requests.get('http://status.ovh.com/vms/index_%s.html' % (dc))
    data = r.text
    soup = BeautifulSoup(data, features="lxml")
    for lv in levels:
        # table cells carrying this rack-level CSS class
        for rack in soup.find_all('td', class_=lv):
            match = re.match(rackPattern, rack.get_text())
            if match:
                # parse raw html from tag attribute 'data-content'
                content_soup = BeautifulSoup(rack.a.get('data-content'), features="lxml")
                # print found information
                print("%s: %s -> %s:\n\t%s" % (dc.upper(), lv.upper(), rack.get_text(), content_soup.ul.li.get_text()))
@lucasRolff

The rackPattern is incorrect and will never match anything from RBX-1 (rbx), since rack labels in RBX-1 don't start with a letter. An easy fix is:

rackPattern = '[A-Z]?[0-9]+[A-Z][0-9]+'
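
For illustration, a quick check of how a stricter pattern and the fixed one behave; this assumes RBX-1 labels simply lack the leading building letter, and the sample labels below are made up:

import re

strict = r'[A-Z][0-9]+[A-Z][0-9]+'   # requires a leading letter
fixed = r'[A-Z]?[0-9]+[A-Z][0-9]+'   # leading letter optional (the fix above)

print(re.match(strict, 'G131B11'))   # matches the lettered form
print(re.match(strict, '31B11'))     # None -- an RBX-1 style label would be skipped
print(re.match(fixed, '31B11'))      # matches once the letter is optional
print(re.match(fixed, 'G131B11'))    # still matches the lettered form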
