29 July 2015
Migration to the cloud, virtualisation, BYOD, a growing mobile workforce, SDN – there is no doubt that the face of networking in the corporate world has changed irrevocably in recent years. And there’s yet more to come when you start factoring in the upcoming Internet of Things, dealing with our insatiable appetite for data, and the ever-present threat of increasingly sophisticated cyber attacks. All of which not only adds up to greater complexity for network managers, but also means that the need to continually monitor LANs and WANs has never been more crucial.
“Modern life seems to be increasingly focused on immediate access to information or the obtaining/sharing of data intensive content,” says Geoff Kempster, technical manager at test equipment specialist Microlease. “This is placing huge pressures on both LAN and WAN infrastructure to deliver elevated levels of bandwidth.”
As a result of all the factors outlined above, Kempster says owners of LANs and WANs need to continually monitor and upgrade the performance of their networks, as well as ensuring that they are functioning at maximum efficiency. “This has to be a whole life process, from the initial installation of the network all the way through its operational lifespan.”
That last piece of advice sounds expensive and is likely to turn most of the corporate bean counters into gibbering wrecks. So can’t today’s networks continue to be tested and monitored using the existing tools that the IT department invested in years ago?
Perhaps. For instance, Richard Clothier, marketing manager at network performance solutions provider Phoenix Datacom, says traditional network test tools still have a part to play when building and troubleshooting LANs, WLANs and WANs.
“You will always need to verify new (and existing) copper and fibre optic cabling, Wi-Fi coverage, switches, routers and, in some cases, connectivity from your service provider. However, reliance on the network is heavier now than at any previous point in time because enterprises are always seeking to improve their capabilities and staff productivity with new services and applications.”
He goes on to say that because the network needs to step up to the challenge of delivering securely, consistently and at all times, the tools and systems used to test and validate them have to keep up with the associated demands.
Brad Reinboldt, solution manager at Viavi Solutions (formerly JDSU), agrees here. He believes network managers can continue to utilise passive monitoring of network traffic as they have done in the past, but points out that they will need to support higher traffic levels and emerging traffic types.
“With WAN loads becoming ever more critical, they also need to more effectively verify transport layer health, ideally with active testing that can alert on issues before they impact the application layer.
“It’s worth highlighting the importance of ongoing validation of WAN performance in particular, as it relates to quantifying network delay. This is critical because WAN delay is often a function of carrier congestion and any material variations – especially slower – can have drastic implications for application and service health.”
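The kind of ongoing WAN delay validation Reinboldt describes can be sketched in a few lines. The example below is a minimal illustration, not any vendor's tool: the function name, sample values and 20% drift threshold are all hypothetical, and real active testing would of course gather the round-trip samples from live probes rather than a list.

```python
import statistics

def check_wan_delay(samples_ms, baseline_ms, tolerance_pct=20.0):
    """Flag a material variation in WAN delay against an agreed baseline.

    samples_ms    -- recent round-trip delay measurements in milliseconds
    baseline_ms   -- the normal delay established for this path
    tolerance_pct -- how far the mean may drift before raising an alert
    """
    mean_delay = statistics.mean(samples_ms)
    jitter = statistics.pstdev(samples_ms)  # delay variation across samples
    drift_pct = (mean_delay - baseline_ms) / baseline_ms * 100.0
    return {
        "mean_ms": mean_delay,
        "jitter_ms": jitter,
        "drift_pct": drift_pct,
        "alert": drift_pct > tolerance_pct,
    }

# A path that normally sits at 40 ms has slowed markedly -- alert fires
# before users start complaining about the applications riding on it:
result = check_wan_delay([55.0, 60.0, 58.0, 62.0], baseline_ms=40.0)
```

The point of the drift check, as Reinboldt notes, is that it alarms on the transport layer before the slowdown surfaces as application-layer pain.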
More about how carriers can impact enterprise WAN testing later. But for now, as corporate networks become bigger, faster and more complex, the demands placed on them mean they are now significantly different to what they were just a decade ago.
“Perhaps ten years ago it was legitimate enough to rely on traditional load testing tools,” says Jeff Curley, business solutions director at infrastructure monitoring specialist Aurora365. “Now, testing demands much more than simply validating the performance of switches and routers, as the requirement is to test how applications are performing across organisational boundaries.”
IT management software specialist SolarWinds echoes this and says it’s what we’re doing on the network rather than the network itself that is dictating what needs to be tested.
For example, Don Jacob, the company’s “head geek”, says video usage has rocketed, BYOD means workers could be using up to three personal devices (laptop, mobile and tablet) on the corporate network, remote connections have increased, and malware has become smarter.
“If the network admin is managing a network that is experiencing this exponential growth of data volume, speed and new technology adoption, then it is also the time to invest in new hardware and software capable of handling the requirements of the enterprise,” he says.
For instance, Jacob says test equipment that was used years ago for signature-based security analysis of known worms will not be enough to evaluate a security breach that uses today’s non-signature attacks.
So what should network managers look out for when testing, and how should they go about it given all the challenges?
The cost of testing
“First of all, effort must be made to ensure that the physical installation is fully compliant with the stipulated requirements,” advises Kempster. “This may involve the physical testing of LANs to certify their conformance to the relevant Cat 6 or Cat 7 standards. Alternatively, it could be verifying the characteristics of a fibre in a WAN, which may include dispersion testing and thorough optical time domain reflectometer measurements.”
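Part of verifying a fibre installation of the kind Kempster mentions is checking that total link loss fits within the optical power budget before any OTDR trace is even taken. The sketch below is an illustrative back-of-the-envelope calculation only; the per-connector and per-splice loss figures are typical assumed values, not ones from the article, and real certification would use measured losses.

```python
def fibre_loss_budget(length_km, atten_db_per_km, connectors, splices,
                      connector_loss_db=0.5, splice_loss_db=0.1,
                      safety_margin_db=3.0):
    """Estimate total link loss (dB) to compare against the power budget.

    Uses commonly assumed worst-case figures: 0.5 dB per mated connector,
    0.1 dB per fusion splice, plus a 3 dB margin for ageing and repairs.
    """
    return (length_km * atten_db_per_km
            + connectors * connector_loss_db
            + splices * splice_loss_db
            + safety_margin_db)

# 20 km of single-mode fibre at 1550 nm (~0.25 dB/km is a common figure),
# two patch-panel connectors and four fusion splices along the route:
total = fibre_loss_budget(20, 0.25, connectors=2, splices=4)
```

If the transmitter-to-receiver power budget comfortably exceeds this estimate, the OTDR measurement then confirms where the real losses actually sit along the span.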
Kempster continues by saying that once the physical environment of the network has been verified, data performance can then be tested: “This could be a simple RFC-2544 test. But more companies are looking to use the newer, highly sophisticated Y.1564 test processes, which are designed to enable testing of the network that is more applicable in real world scenarios.”
Y.1564 is an Ethernet service activation test methodology developed by the International Telecommunication Union. But whichever method is adopted, all of it will be for nought if the network isn’t continually monitored and tested on a regular basis, particularly as it evolves over time.
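At its core, a Y.1564 service activation test compares measured key performance indicators for each configured service against agreed service acceptance criteria (SAC). The sketch below is a simplified illustration of that pass/fail logic only – the function, field names and numbers are hypothetical, and a real tester also ramps traffic through CIR/EIR steps and runs a longer soak test.

```python
def y1564_service_check(measured, sac):
    """Compare measured KPIs for one Ethernet service against its SAC.

    measured -- throughput_mbps, frame_loss_ratio, delay_ms, jitter_ms
    sac      -- cir_mbps plus the maximum acceptable loss/delay/jitter
    """
    failures = []
    if measured["throughput_mbps"] < sac["cir_mbps"]:
        failures.append("throughput below CIR")
    if measured["frame_loss_ratio"] > sac["max_loss_ratio"]:
        failures.append("frame loss exceeds SAC")
    if measured["delay_ms"] > sac["max_delay_ms"]:
        failures.append("frame delay exceeds SAC")
    if measured["jitter_ms"] > sac["max_jitter_ms"]:
        failures.append("delay variation exceeds SAC")
    return (len(failures) == 0, failures)

# A 100 Mbit/s service that can only sustain 98.6 Mbit/s fails activation:
sac = {"cir_mbps": 100.0, "max_loss_ratio": 1e-4,
       "max_delay_ms": 10.0, "max_jitter_ms": 2.0}
ok, why = y1564_service_check(
    {"throughput_mbps": 98.6, "frame_loss_ratio": 0.0,
     "delay_ms": 4.2, "jitter_ms": 0.7},
    sac,
)
```

This per-service, all-KPIs-at-once view is what makes Y.1564 more representative of real-world conditions than a one-metric-at-a-time RFC 2544 run.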
And for the gibbering wrecks in the CFO’s department, Kempster has this advice: “The company should, with help from its test equipment partner, be able to decide which sourcing option is most appropriate from a financial point of view – whether rental, purchase, new, used, or even utilising divide-by programmes – and be able to keep total flexibility.”
The cost of testing is clearly going to be a hurdle that needs to be overcome in these cash-strapped times. But as Phoenix Datacom points out, not investing in testing could prove to be a heftier price to pay. For instance, it says as more enterprises turn to cloud-based services, not all of them will have the budget to simultaneously invest in application performance monitoring solutions. But without these, how will they know if their new investment will deliver the performance and capability gains required to justify the outlay?
“The systems and applications that are critical to the enterprise can only be as good as the network used to deliver them,” says Clothier. “If you had to choose between knowing the new service/application would work properly (i.e. delivering value), and the uncertainty of its performance once rolled out across the company, which would you choose? It’s a no-brainer.
“The cost for such erudition is less than you may think and micro in size when you factor in the cost of the new service or application – and indeed the price of potential downtime, failure and staff twiddling their fingers.”
As a result, Clothier says Phoenix Datacom is seeing increases in the number of firms who want to stress and test their network fabrics to a far greater extent than at any previous point in time.
“We are seeing particular growth in demand for our professional test/validation services and our Xena Networks range of Ethernet and TCP traffic generation equipment. These include the Ixia PerfectStorm ONE platform for generating high volumes of stateful (application layer) traffic and malware, and our PacketStorm solutions for replicating the unfavourable conditions and impairments of IP networks. [The latter] shows how services such as VoIP and video will work when used over varying types of connection, from fixed Ethernet in the office to a remote user connecting via VPN over a low bandwidth DSL, cellular or satellite link.”
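The payoff of replicating impairments in the way Clothier describes is being able to predict how a service such as VoIP will sound over each link type before rollout. One common way to express that is a simplified ITU-T G.107 E-model calculation, turning delay and loss into an estimated mean opinion score (MOS). The sketch below is a rough planning approximation only – it assumes G.711 with packet-loss concealment and ignores echo and equipment impairments, and the constants are standard published E-model figures, not anything from Phoenix Datacom's tooling.

```python
def mos_estimate(one_way_delay_ms, loss_pct):
    """Very simplified E-model: R-factor from delay and loss, then MOS.

    Assumes G.711 with packet-loss concealment (Ie = 0, Bpl = 25.1);
    ignores echo and equipment impairments -- a rough planning figure only.
    """
    d = one_way_delay_ms
    # Delay impairment: linear term, plus a steeper penalty past ~177 ms
    delay_impairment = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    # Effective equipment impairment grows with packet loss
    loss_impairment = 95.0 * loss_pct / (loss_pct + 25.1)
    r = max(0.0, min(100.0, 93.2 - delay_impairment - loss_impairment))
    # Standard R-to-MOS mapping for 0 < R < 100
    return round(1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r), 2)

# Office Ethernet versus a congested satellite hop:
office_mos = mos_estimate(one_way_delay_ms=20, loss_pct=0.1)
satellite_mos = mos_estimate(one_way_delay_ms=300, loss_pct=2.0)
```

Feeding emulated impairments for each connection type – fixed Ethernet, DSL, cellular, satellite – through a model like this is what lets a tester show, per link, whether voice quality will be acceptable.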
SolarWinds continues with the cost theme and reckons it should be the first consideration for an organisation. Furthermore, it should also look at whether it can afford to invest in the space and manpower needed not only for the test equipment but also for the test lab that has to be created. Jacob says when choosing equipment for this lab, network administrators should try to design a test area that can imitate the actual network as much as possible while also taking into account cost constraints.
“They should also keep in mind whether the hardware and software can scale for the future and has the capabilities to be reprogrammed to support future requirements. Further, look for test equipment that can provide analytics or actionable intelligence, and not just statistics from running a test. This will help design and scale up the actual network to achieve zero downtime.
“Test equipment should not be chosen based on what the organisation has today, but should be based on what it will be in the next five years, while keeping in mind cost considerations and scalability of the equipment to cater to future demands of speed and volume.”
Jacob reckons it might be feasible for some organisations to consider outsourcing the test lab to a network testing vendor. Such a vendor could then handle the enterprise’s requirements as and when needed, and help scale up when the need to test new technology comes up. “This way, the enterprise will not have to invest capex in new hardware and software when the network evolves,” he says.
The latter certainly comes across as sound advice given the pace at which network technology has evolved over the last few years and is expected to do so in the foreseeable future. So what are the features and functions that test and monitoring equipment will need to incorporate in the coming years?
The test of time
According to Viavi, test equipment vendors will need to continue to develop products that address the latest trends and technology challenges with distinct features. “Manufacturers need to ensure their solutions maintain operational visibility for the network manager with these new challenges by enhancing their form, features, and functions,” says Reinboldt.
“Examples here include proactively verifying underlying transport layer status – a must for consistent WAN performance. Increasing packet capture rates to keep up with the traffic deluge, and the ability to de-encapsulate tunnelled traffic will also be important when troubleshooting is required.”
Security is another key feature, and he adds that test equipment capable of passively capturing all network traffic can serve as a backstop to other IT security initiatives.
As more services are outsourced, Reinboldt also advises enterprises to keep in mind the concept of collaborating with carrier and service providers: “Moving from a model that was often more adversarial to one of cooperation can maximise the business value of all stakeholders.”
He says there are several examples here, such as: “Enterprises could work with a service provider on WAN performance and optimisation by deploying TrueSpeed VNF [Viavi’s standards-based network performance tool] in the cloud provider’s infrastructure. This allows for rapid detection of WAN issues before they reach a level that impacts users.”
Phoenix Datacom’s Clothier says service providers are usually early adopters for new test and measurement technologies and processes because networks are their “bread and butter”.
“Until recently, Wi-Fi testing and planning solutions have provided a heat map view of Wi-Fi coverage and signal strength. But in response to the growing need to improve staff productivity, we can now empower customers to make key decisions based on how services and applications will work in all areas of the building over the Wi-Fi network.”
As an example, Clothier says domestic installation engineers from service providers can now place probes throughout a house and quickly show consumers on a tablet how applications such as Skype and Netflix will work in each room. He says this capability has reduced truck rolls and customer complaints, and has also provided opportunities to upsell more access points and repeaters.
“The solution service providers are using for this is AirScout from Greenlee Communications. As Greenlee’s partner in the UK we are now receiving much interest from enterprises moving premises, adding new Wi-Fi access points, and of course adopting new services and business applications. This arms them with the valuable intelligence required to make basic but very important decisions over the number and location of APs.”
Microlease agrees that as carrier/service providers improve their networks, more specialist test equipment will be needed. For example, Kempster says the integration of ROADMs (reconfigurable optical add-drop multiplexers) into networks is helping to improve efficiency levels as it removes the need for electro-optical conversion activity. This heightens the capacity of deployed fibres, allowing far greater levels of data to be transported.
“With the increasing use of ROADMs to improve the performance of networks, more specialist test equipment is required in order to measure the performance of these networks. State-of-the-art optical spectrum analysers and power meters are designed to look at these systems.
“Also, the widespread deployment of LTE networks has meant that synchronisation of network infrastructure is even more critical now than ever before. Even small errors can result in dropped calls and failed handovers which will frustrate subscribers.
“Therefore, the use of packet-based timing technology such as Synchronous Ethernet and IEEE 1588v2 Precision Time Protocol is becoming commonplace, and more in-depth testing of networks for stability is now a high priority.”
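The IEEE 1588v2 Precision Time Protocol Kempster refers to recovers a slave clock's offset from a four-timestamp exchange with the master, on the assumption that the path delay is symmetric. The arithmetic at its heart is simple enough to sketch; the timestamp values below are illustrative units, not real nanosecond captures.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic IEEE 1588 two-way exchange.

    t1 -- master sends Sync;      t2 -- slave receives it
    t3 -- slave sends Delay_Req;  t4 -- master receives it
    Assumes a symmetric path: any asymmetry shows up directly
    as an error in the computed offset, which is exactly why
    in-depth network stability testing matters here.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# A slave clock running 5 units ahead over a path with 10 units of
# one-way delay is recovered exactly when the path is symmetric:
offset, delay = ptp_offset_and_delay(t1=100, t2=115, t3=200, t4=205)
```

Queueing variation in the packet network perturbs these timestamps directly, which is why LTE-grade synchronisation pushes operators towards the stability testing Kempster describes.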
Jacob’s advice to test equipment makers is to scale up their kit so that it can be used with higher volumes of data at a higher speed: “Traffic generators should be able to generate more volumes, and protocol analysers should have the capability to handle and process this high-speed data even more quickly.”
He adds that vendors should also consider building functionalities into their equipment that allow users to test based on their organisations’ needs. For example, these could include metrics specific to voice and video performance, desktop and server virtualisation, etc., or higher speed and volume considerations when it comes to testing cabling and connectors.
Ultimately, Jacob believes vendors should come up with test equipment that has the capability to provide actionable and intelligent data that can be used to improve existing network designs.
“The major focus for networks is performance and scalability, which is also what the test equipment should be capable of. The test equipment of the future should be able to help customers make informed decisions that will help them with the upgrading of their existing network infrastructure, as well as knowing what to check for when adopting new technologies.”