I can’t give an authoritative answer (not my domain), but I think there are two main ways these kinds of things are done.
First is just observing the page or service as an external entity: requesting a page or hitting an endpoint and tracking whether you get a response (if not, it’s presumably down), or, as a very naive way of measuring load, tracking the response time. This is easy in the sense that you need no special access to the target, but it’s also limited in its accuracy.
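A minimal sketch of that external, black-box style of probe, assuming the target exposes a plain HTTPS endpoint (the URL below is just a placeholder):

```python
import time
import urllib.request
import urllib.error

def probe(url: str, timeout: float = 5.0) -> dict:
    """Request the URL once and report whether it responded, and how fast."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            elapsed = time.monotonic() - start
            # Treat any 2xx/3xx as "up"; anything else (or no answer) as "down".
            return {"up": 200 <= resp.status < 400, "status": resp.status, "seconds": elapsed}
    except (urllib.error.URLError, TimeoutError):
        return {"up": False, "status": None, "seconds": time.monotonic() - start}

if __name__ == "__main__":
    # Placeholder endpoint; a real monitor would run this on a schedule
    # and record the results somewhere.
    print(probe("https://example.com/health"))
```

A real uptime checker would repeat this from several locations and aggregate the results, but the core idea is just this loop.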
The second way, which is what your GitHub example is doing, is having access to special API endpoints for (or direct access to) performance metrics. Since the GitHub status page is literally run by GitHub, they obviously have easy access to any metric they could want. They probably (certainly) run services whose entire job is to produce reliable data for their status page.
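I don’t know how GitHub actually does it internally, but a hedged sketch of that “privileged access” approach would be a service that instruments itself and exposes its own metrics on an internal endpoint that the status page (or a collector) can scrape. The counter names and port here are made up for illustration; real setups usually use a proper metrics library and a time-series database:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-process counters, not GitHub's actual metrics.
metrics = {"requests_total": 0, "errors_total": 0}
lock = threading.Lock()

def record_request(error: bool = False) -> None:
    """Call this from the application's request-handling code."""
    with lock:
        metrics["requests_total"] += 1
        if error:
            metrics["errors_total"] += 1

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the current counters as JSON for whoever scrapes this endpoint.
        with lock:
            body = json.dumps(metrics).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A status page or collector would poll http://host:9100/ periodically.
    HTTPServer(("0.0.0.0", 9100), MetricsHandler).serve_forever()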
The finer details of each of these options are pretty open ended; there are many ways to do it.
Just my 5¢ as a non-web developer.