Monitoring from the cloud: the dos and don'ts for managers
Small and medium-sized businesses looking to monitor their IT systems should consider cloud as an option
Monitoring IT systems sounds like a pretty simple task, and within a small organisation it generally is.
As the business grows, however, company managers have to start thinking about it considerably more – and more importantly, once they've thought about it, they actually need to do it.
And this is more easily said than done. As we all know, the IS department is often only noticed when something breaks and the users have a problem; within the IS department, monitoring is often only noticed when something goes wrong and the users scream before the systems team sees it.
Monitoring software can be expensive, and when the IT manager puts that requisition under the finance director's nose he or she will often ask why it's needed at all – after all, the company has already spent a heap of money on resilient WAN links, routers configured with HSRP for auto-failover, fleets of servers and databases in HA clusters, so even if something blows up the services will keep working.
And with many monitoring packages the per-node licence cost can be quite high unless you're licensing large numbers of ports, so an SMB could find itself paying a premium for just a few hundred switch ports and a few dozen servers.
Monitoring, then, is a superb candidate for outsourcing as a managed cloud service. There are a few companies who offer such a service, including Cable & Wireless and CSC.
The usual concerns apply when outsourcing monitoring: the uptime of the link and its security. How far the company takes the concept depends on the size of the installation; in a small setup some kind of SSL-based VPN is best, while in a larger one a permanent IPSec tunnel or even a point-to-point link might be a more sensible option.
Whichever approach is taken, consider the security of the connection. If the access is SSL or IPSec VPN the chances are that the company's end of the connection will land on a firewall of some kind, or at the very least a router that's capable of a decent level of security.
If a point-to-point approach is taken then a similar approach should be followed, ie land the link on a security-capable router; if that's not possible, land it outside a firewall and, as above, link it to a router while following a sensible security policy.
Managers should not just be concerned with the security of the link, they should consider the robustness of it too. If they're using an outsourced monitoring service, they need to consider what happens if the external service can't connect to the systems it's trying to monitor. No IT manager wants the monitoring company to get his or her team out of bed at 3am because its Internet connection went down and all the systems turned red on the monitoring screen.
The safest way is to monitor the monitoring system. Managers should look to get some kind of independent link (a small router with 3G capability, like a DrayTek, for example) and set up a system internally to monitor the monitoring company's systems over the normal link between the two. It should then be configured to send alerts via the independent link, so that if monitoring dies, managers know it has keeled over before the phone starts ringing.
As part of this process, the system should be set up so it can also be used to connect to the management network. Companies should then add an additional step to their standard procedures, allowing teams to use this link to double-check the monitoring system's status. This lets the manager know what’s happening in the event of a major alarm and maybe helps prevent him from sending in the cavalry.
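The monitor-the-monitor idea above boils down to a heartbeat watchdog: note when the provider's systems were last heard from, and raise an alarm over the independent link when they go quiet. A minimal sketch in Python, where the provider host name, the timeout and the fallback alert mechanism are all hypothetical placeholders for whatever is actually in place on site:

```python
# Hypothetical endpoint and threshold -- substitute the provider's real
# address and a timeout that suits the service's polling interval.
PROVIDER_HOST = "monitor.example-provider.net"
HEARTBEAT_TIMEOUT = 300  # seconds of silence before we raise the alarm

def heartbeat_ok(last_seen: float, now: float,
                 timeout: float = HEARTBEAT_TIMEOUT) -> bool:
    """Return True if the provider's monitoring system looks alive."""
    return (now - last_seen) <= timeout

def watchdog_tick(last_seen: float, now: float, send_fallback_alert) -> bool:
    """One pass of the watchdog: if the monitoring service has gone quiet,
    push an alert over the independent (e.g. 3G) link.  Returns True when
    an alert was fired, False when all is well."""
    if heartbeat_ok(last_seen, now):
        return False
    send_fallback_alert(
        f"No heartbeat from {PROVIDER_HOST} for over {HEARTBEAT_TIMEOUT}s"
    )
    return True
```

In practice `send_fallback_alert` would be whatever sends an SMS or email via the independent router; the point is that the alert path and the monitored path never share a link.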
Basic data gathering
The first layer of monitoring that’s needed is basic system and service uptime. The first place for a manager to start is with ICMP (ping) so he can make sure each device is actually responding, and then layer SNMP (version 2 or later, and preferably version 3 with some sensible encryption) on top of this to verify interface status, error counts on key ports, CPU loading, disk usage and the like.
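The layering described above – ICMP reachability first, SNMP-derived health on top – can be sketched as a simple classification rule. The thresholds and field names below are illustrative assumptions, not recommendations; in a real deployment each value would come from an SNMP GET against the relevant OID:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SnmpSample:
    """Values typically pulled via SNMP GET; OIDs omitted for brevity."""
    cpu_load_pct: float
    disk_used_pct: float
    interface_errors: int

def device_status(ping_ok: bool, sample: Optional[SnmpSample]) -> str:
    """Classify a device: ICMP first, then SNMP-derived health.
    Thresholds are illustrative only."""
    if not ping_ok:
        return "down"        # no ICMP response at all
    if sample is None:
        return "degraded"    # pings, but the SNMP agent isn't answering
    if (sample.cpu_load_pct > 90 or sample.disk_used_pct > 95
            or sample.interface_errors > 0):
        return "warning"
    return "ok"
```

The useful property of this ordering is that a dead SNMP agent on a live box is flagged differently from a dead box, which saves chasing the wrong fault at 3am.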
In an outsourced setup, there won't be a need to do all this stuff over the link to the monitoring company. However, it would be a good idea to have some internal monitoring devices that send their monitoring packets over the LAN, gather the results, then report the aggregated summary back to "mother". Of course, companies would need to maintain communication between the two ends, but with a system inside the company network monitoring services and sending summary data and alert "traps" only as required, they would be able to make the best use of the link.
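The aggregation step is the key to keeping the link quiet: the internal box polls everything locally and only a compact summary, plus the identities of anything unhealthy, crosses the wire to the provider. A sketch, assuming the per-device status strings from whatever local checks are in use:

```python
def summarise(statuses: dict) -> dict:
    """Collapse per-device statuses (device name -> status string) into
    one compact payload for the provider.  Only the summary counts and
    the alert 'traps' for unhealthy devices need to cross the link."""
    alerts = [name for name, status in statuses.items() if status != "ok"]
    return {
        "total": len(statuses),
        "healthy": len(statuses) - len(alerts),
        "alerts": sorted(alerts),  # the only per-device detail sent out
    }
```

A few hundred devices polled every minute internally can thus collapse to one small report and a handful of traps on the WAN side.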
There is one potential problem here because the provider will be placing one of its systems inside the company network. This really isn't an issue, though, because not only can you whack a firewall between it and your network (who says both sides of a firewall can't be on the LAN?) but one would hope that the management LAN is separate, either physically or at least by VLAN separation, from the production data network and leakage is a minimal risk.
Although it’s possible to keep the traffic levels down by concentrating the majority of monitoring traffic on an internal system, there will usually be some kind of need for the service provider's systems to interrogate devices from afar. This is fine, and it's a simple case of designing the security model so that the monitoring systems are able to connect to the right systems on the right ports – and only to the right systems on the right ports.
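That "right systems on the right ports – and only the right systems on the right ports" rule is just a default-deny allowlist keyed on device and port. A minimal sketch with hypothetical device names; in practice this policy would live in the firewall or router ACL rather than application code:

```python
# (device, port) pairs the provider's systems may reach -- hypothetical
# examples; 161 is the standard SNMP agent port.
ALLOWED = {
    ("core-sw1", 161),
    ("srv-db1", 161),
}

def provider_may_connect(device: str, port: int) -> bool:
    """Default-deny: anything not explicitly listed is refused."""
    return (device, port) in ALLOWED
```

The design choice worth copying is the default-deny shape: adding a device means adding a line, while forgetting one fails safe.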