28 October 2015
IP is a technological success story. Starting life as a connectionless datagram service in the 1970s, since coupling with TCP it has become the predominant way of relaying data packets between network hosts.
It has also proved its worth as a communications protocol in LANs, and as such has served as a unifying technology where computer resources want to connect. In this capacity, IP has enabled the various proprietary and vendor-specific comms protocols, hitherto found on corporate
IT systems, to be superseded.
With mass-adoption, IP brought continuity and its openness enabled interoperability both inside organisations as well as out into the wide area. Alternative, and sometimes contending, communications protocols have been largely (but not entirely) marginalised. Thanks to IP, network managers had fewer thorny integration projects to bang their heads against and were able to turn their attention to more innovative IT delivery programmes.
Little wonder then, that IP’s status in the networking stack hierarchy is assured.
“As an enterprise LAN communications protocol, IP is now king,” proclaims Leon Adato, network management ‘head geek’ at network tools vendor SolarWinds. “From now on only a few – very few – new specialist applications will be non-IP-compliant. I do not see that [older] applications that are not fully-integrated into all-IP networks – and therefore cannot be controlled and optimised by common management tools – will be maintained indefinitely.”
Keeping it in-house
Although IP may have resolved many of the compatibility issues of the past, the advantages it holds are giving rise to a new set of problematic issues for those tasked with running enterprise networks. IP may be ‘just’ a communications protocol, but it is one that’s arguably both a symptom and a cause of complex emerging challenges for the delivery of enterprise IT.
Most evident among these symptoms is the volume growth rate in networked traffic and, more specifically, traffic across data networks generated by commercial activity. The most recent edition of Cisco’s Visual Networking Index (VNI) forecasts that global business IP traffic will grow at a compound annual growth rate (CAGR) of 20 per cent over the next four years, totalling some 29.9EB per month by 2019 (also see News, Jun 2015).
Furthermore, business traffic will increasingly include a wide range of corporate application types – from high-end CAD/CAM and collaboration packages, to VoIP, unified communications and video-conferencing. The VNI predicts that internet-borne video traffic will account for 65 per cent of business internet traffic before the end of this decade – a rise of about 40 per cent on current proportions.
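The scale of the VNI forecast can be sanity-checked with some back-of-envelope compound-growth arithmetic. The sketch below works back from the quoted end-point to the implied current monthly volume; the function and variable names are illustrative, not Cisco's own model:

```python
# Back-of-envelope check of the VNI figures quoted above: 20% CAGR over
# four years ending at 29.9 EB/month, with video at 65% of the total.

def implied_baseline(final_volume, cagr, years):
    """Work back from a forecast end-point to the implied starting volume."""
    return final_volume / (1 + cagr) ** years

baseline_2015 = implied_baseline(29.9, 0.20, 4)   # EB per month today
video_2019 = 29.9 * 0.65                          # 65% video share by 2019

print(f"Implied current baseline: {baseline_2015:.1f} EB/month")
print(f"Video traffic by 2019:    {video_2019:.1f} EB/month")
```

In other words, business IP traffic roughly doubles over the forecast period, with video accounting for the bulk of the end-state volume.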
It is the combined impacts of volume increases in business IP traffic and the enhanced complexity of traffic types contained within those volumes that are the most quantifiable of the new IP challenges. While network management platforms have been around for as long as networks themselves, the scale and pace of change now confronting this critical function calls for some fresh thinking around how they are put to use.
From a technological, budgetary and human resources perspective, managing an enterprise IT estate is a demanding job. Of course, hard-pressed IT directors have other options, and given the welter of advocacy pushing the benefits of cloud-based services and other forms of ICT outsourcing, it might seem counter-rational for bigger organisations to stick to the traditional in-house IT function ethos. But to their way of thinking, challenges such as: legacy system migration; network access provisioning; equipment upgrade cycles; security; implementing tailored management regimens; and balancing
the shifting dynamics of the relationship between IT personnel and internal ‘consumers’, are best handled in-house.
There remain many non-IP-native legacy systems chugging away in the world’s computer rooms and data centres. These systems continue to serve as a platform for critical or business support applications, and as such also continue to constitute another technological progress challenge for those running IP networks.
Some legacy systems perform the tasks that they were designed to do when they were first fired-up back in the day, and may also still receive ongoing support from the vendors who supplied them. Techniques such as encapsulation let them transport the data they send and receive over IP.
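The encapsulation idea mentioned above can be pictured as wrapping a legacy (non-IP) frame in a small header and carrying it as the payload of an ordinary IP datagram. The sketch below is purely illustrative; the header layout and field names are invented for this example and do not correspond to any real tunnelling standard:

```python
# Illustrative sketch of encapsulation: a legacy frame becomes the payload
# of a UDP datagram, prefixed by a small header. The header layout here is
# an invented example, not a real tunnelling protocol.
import struct

TUNNEL_MAGIC = 0x4C45  # arbitrary marker identifying our tunnel packets

def encapsulate(legacy_frame: bytes, protocol_id: int) -> bytes:
    """Prefix the legacy frame with a marker, a protocol id and its length."""
    header = struct.pack("!HHI", TUNNEL_MAGIC, protocol_id, len(legacy_frame))
    return header + legacy_frame

def decapsulate(datagram: bytes) -> bytes:
    """Strip the header and recover the original legacy frame."""
    magic, proto, length = struct.unpack("!HHI", datagram[:8])
    assert magic == TUNNEL_MAGIC, "not one of our tunnel packets"
    return datagram[8 : 8 + length]

frame = b"\x05legacy-terminal-record"
packet = encapsulate(frame, protocol_id=7)
assert decapsulate(packet) == frame
# In practice the packet would then be sent across the IP network, e.g.
# via a UDP socket: sock.sendto(packet, (gateway_host, gateway_port))
```

The legacy system itself is untouched: only the gateway at each end needs to understand the wrapper, which is why such workarounds extend an old platform's life without solving its visibility problems.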
But the technological issues they present are most often ones of system visibility and manageability. Modern IP network management tools aim to provide a technologically holistic view of all network assets and resources, and this is especially important for the most current resource management ideologies, such as service-based/software-defined networking.
Some workarounds have proved effective in extending the inter-operational life-expectancy of many an ageing legacy platform, but these were not intended to be kept going indefinitely.
Organisations have also been prioritising new ICT projects over migration, if outgoing and incoming systems can be made to co-function to an acceptable operational level. In these straitened economic times, many companies are holding onto legacy systems in order to ‘sweat’ their IT assets.
However, the sense of doing all this is undermined when the shortcomings of legacy systems are shown up by the advanced capabilities of IP-based infrastructure, which include both opex efficiencies from lower running costs, and inbuilt reporting facilities able to reveal an application’s contribution to business profitability to inquisitive directors.
One under-recognised inhibitor to legacy-to-IP migration that IT teams should beware of is the belief that such projects must be undertaken on a ‘one-to-one basis’. This is when user specifications for a new IP-based application seek to copy the legacy system in almost every feature.
The fact that many of those old-system features are never used and have been superseded by innovation is usually overlooked. IT managers may be anxious to cover themselves ‘just in case’, so time and money goes into recreating unnecessary functionality.
In this context, facilitating an open and informed approach to change when moving between old and new systems is another challenge that comes with a move to IP.
Advances in the on-premises capabilities of Wi-Fi over floor-to-floor cable infrastructures could increasingly come to represent another challenging aspect of the overall shift to IP-based networking.
Consideration of a WLAN as a viable alternative to a fixed LAN needs to balance several contending factors, and pointers in favour of on-site Wi-Fi as a money-saver and deliverer of flexibility should be tested against scalability and expectations in regard to utilisation patterns (also see feature ‘Fixed or mobile in the unified comms world?’, Apr 2015).
Enterprise-wide Wi-Fi deployments may be technically feasible, but have to be assured against total costs; and there’s no point in paying to extend a hotspot into a part of the building where no employees go to work.
That said, driven by the wireless-only access capabilities of most laptops, tablets and smartphones, organisations of all sizes are shifting rapidly to wireless-only, according to Ronan Kelly, CTO EMEA/APAC at data/telecoms product vendor ADTRAN: “Wi-Fi is transitioning from a network of convenience to being mission-critical for businesses – and this transition is happening quickly.”
Kelly goes on to warn that this should not be regarded simply as a case of one technology out-evolving another.
“It has huge implications for businesses that have neither interest nor skills [in] running Wi-Fi networks. Because unlike their conventional wired network, the Wi-Fi network is subject to performance degradation, and also to security breaches from outside of their building premises if it is not managed to the same level of competence.”
Alcatel-Lucent Enterprise agrees Wi-Fi is becoming more ubiquitous in offices. But Peter Tebbutt, the vendor’s general manager for UK and Ireland, adds: “Introducing an in-building Wi-Fi network co-existent with wired LANs is a fundamental change, because you cannot really change one part of the IP network in isolation – it affects the end-to-end network in a different way.”
Knock-on effects may not be immediately apparent. One example could be the way some users with company laptops switch to using visitor Wi-Fi as their primary point of access, and disengage their computers from the fixed LAN altogether. This could be because the nature of their job has changed and they prefer to work away from their assigned desk space, or perhaps they think that the visitor Wi-Fi is ‘faster’. As a result, network administrators ‘see’ fewer users plugged-in to the fixed LAN, while Wi-Fi access demand creeps up.
However, as Tebbutt suggests, network engineers should be mindful of Wi-Fi “honeymoon periods”. If on-premises Wi-Fi utilisation increases per-head, then fresh bandwidth contention issues could crop up that will reshape users’ performance expectations, especially if the reason why they are accessing the Wi-Fi network away from their desks is to participate in an online conferencing session. “As LAN video explodes, you will then have to ask if your local Wi-Fi is up to the job,” advises Tebbutt.
According to Pascal Tangaprégassam, product manager at network and IT planning software vendor InfoVista, such video services over IP raise new challenges for network managers, and their technical competences will be thoroughly tested: “The network team has to face a new situation with video/IP. They cannot act as bottlenecks, preventing the use of this new form of communication and collaboration between users, but rather as enablers of new video-based services with all the challenges.”
With video over Wi-Fi it is too early to assess what effect the 802.11ac specification (aka ‘Gigabit Wi-Fi’), operating at 433Mbps-500Mbps (single stream) and 1Gbps-1.3Gbps (multi-stream) in the 5GHz frequency range, will have. But installing more capacious Wi-Fi is of course another capex item to be budgeted for.
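The headline 802.11ac figures quoted above follow from the standard's PHY parameters, and the arithmetic is worth seeing once. The sketch below uses the published constants for an 80MHz channel at the highest modulation and coding rate with a short guard interval:

```python
# Rough 802.11ac PHY-rate arithmetic behind the figures above (80 MHz
# channel, 256-QAM at rate 5/6, short guard interval).

DATA_SUBCARRIERS_80MHZ = 234   # usable OFDM data subcarriers in an 80 MHz channel
BITS_PER_SUBCARRIER = 8        # 256-QAM carries 8 bits per subcarrier
CODING_RATE = 5 / 6            # highest 802.11ac coding rate
SYMBOL_TIME_US = 3.6           # OFDM symbol duration with short guard interval

def phy_rate_mbps(spatial_streams: int) -> float:
    """Peak PHY rate in Mbit/s for a given number of spatial streams."""
    bits_per_symbol = DATA_SUBCARRIERS_80MHZ * BITS_PER_SUBCARRIER * CODING_RATE
    return bits_per_symbol * spatial_streams / SYMBOL_TIME_US  # bits/us = Mbit/s

print(phy_rate_mbps(1))  # ~433 Mbit/s: the quoted single-stream figure
print(phy_rate_mbps(3))  # ~1300 Mbit/s: the quoted 1.3 Gbit/s figure
```

Real-world throughput is of course considerably lower once protocol overheads, contention and range effects are taken into account, which is why capacity planning cannot rely on the headline rates alone.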
All that may also lead to a move to an all-wireless LAN – and that will bring challenges that are in many respects unforeseeable and therefore partially unmanageable (in the initial stages at least). Some say wireless is unpredictable compared to more prevalent wired infrastructure, and network managers will therefore need new and perhaps quite differently designed tools to perform tasks like troubleshooting.
Furthermore, a strategic transition to in-building Wi-Fi might not necessarily mean a reduction in the number of installed wired IP network ports to be managed by the IT function. These ports may be claimed by access points for Wi-Fi propagation, or by IP-manageable workplace equipment such as printer-photocopiers, CCTV cameras, even vending machines.
Sprinkling QoS about like “fairy dust”
This leads to another complexity arising from all-embracing IP: that of endpoint device management. Previously, networks largely had to deal with stationary devices – desk PCs and docked laptops. But now, endpoint device management must also embrace mobiles, laptops, tablets – maybe even smartwatches.
And then, of course, there are applications in the evolving Internet of Things. SolarWinds believes that this will force a fundamental rethink on how IP is managed. Adato suggests that the crunch point will come in the vexed area of quality-of-service. He reckons that the QoS concept is abused by management execs who have superficial comprehension of the factors governing overall network performance delivery.
“They sprinkle QoS about like fairy dust. It will be a non-issue until the point where [network overload] causes a QoS dip that results in, say, a lost trading opportunity and then all hell breaks loose, even if network managers had actually forewarned of this possibility.”
Adato adds that with board-level directors complaining and users raising tickets for what they assume are hardware or software problems, network engineers will find themselves “hammered at both ends”.
One approach to regaining control of IP network resources is to adopt a policy-based management strategy. As a generic solution, this aims to enable the administration of enterprise networks via the activation of a policy set that effectively mandates usage access rules and, for instance, the allocation of IT resources.
The useful feature of policy-based management is that the behaviour of managed network devices can change dynamically within the thresholds of the set rules without having to keep re-adjusting the device directly or having to disengage devices to make changes. Users or groups can have tailored policies applied to their profiles, and these determine what functions (available QoS thresholds, bandwidth, security, etc.) they can and cannot access over a given schedule. The aim is to ensure that users are assigned network resources appropriate to their agreed needs.
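The mechanism described above can be pictured in a few lines of code. In the sketch below, each user group maps to a policy record, and requests are checked against it at admission time rather than by reconfiguring devices directly; all group names, fields and threshold values are illustrative assumptions, not any vendor's schema:

```python
# Minimal sketch of policy-based management: behaviour is governed by a
# policy set, not by direct device reconfiguration. All names and values
# here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Policy:
    qos_class: str        # e.g. the DSCP class the group's traffic is marked with
    bandwidth_mbps: int   # per-user bandwidth cap
    hours: range          # schedule during which the policy applies

POLICIES = {
    "trading":     Policy(qos_class="EF",   bandwidth_mbps=100, hours=range(6, 20)),
    "back_office": Policy(qos_class="AF31", bandwidth_mbps=20,  hours=range(8, 18)),
    "guest":       Policy(qos_class="BE",   bandwidth_mbps=5,   hours=range(0, 24)),
}

def allowed(group: str, requested_mbps: int, hour: int) -> bool:
    """Admit a request only within the group's policy thresholds."""
    policy = POLICIES[group]
    return hour in policy.hours and requested_mbps <= policy.bandwidth_mbps

assert allowed("trading", 50, hour=9)
assert not allowed("guest", 50, hour=9)   # exceeds the guest bandwidth cap
```

The point of the indirection is that changing a single `Policy` record retunes every device and session governed by it, without touching the devices themselves.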
Video and UC
The key characteristic of video is that it is latency-sensitive as well as fairly unpredictable over the course of a day. As a result, InfoVista believes policy-based network management is vital where video-over-IP is a supported application. Tangaprégassam says: “Much like regular IP phone calls, video-over-IP network flows can happen at any time and last from a few seconds to a few hours, such as for a video-conference of board meetings, streaming of online training, and so on.”
He goes on to point out that video-over-IP imposes a new governance of the network as enterprise users step up from the received notion of ‘scheduled’ video interactions, and start making video calls (and leaving video messages) on the same basis as most voice calls and voicemails are made.
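One widely used mechanism for giving such latency-sensitive flows priority (a general technique, not one InfoVista prescribes) is to mark outgoing packets with a DiffServ code point so that routers along the path can queue them ahead of bulk traffic. A minimal sketch, assuming a Linux-style socket API:

```python
# Marking a socket's traffic with the Expedited Forwarding (EF) DSCP,
# the class conventionally used for interactive voice and video. Sketch
# only: in a real deployment the network must also be configured to
# trust and act on the marking.
import socket

DSCP_EF = 46            # Expedited Forwarding code point
TOS_EF = DSCP_EF << 2   # DSCP occupies the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
# Video payloads sent on this socket now carry the EF marking, e.g.:
# sock.sendto(rtp_packet, (receiver_host, receiver_port))
sock.close()
```

The marking itself is cheap; the governance challenge Tangaprégassam describes lies in deciding which users and applications earn it.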
Video-over-IP has emerged as the most popular delivery method for a range of organisations looking to upgrade their AV communication services. For instance, it enables organisations to provide access to TV news channels to inform corporate decision-making, and enables staff and visitors to receive video content on fixed screens and mobile devices. At the same time, technology managers are implementing IP/video delivery systems for workforce training, and to enhance staff communications.
Video applications will become more popular as standardisation on IP becomes ubiquitous inside and outside the enterprise infrastructure, helping to resolve some of the latency glitches that used to cause frustrations.
“Because we are seeing a lot more IP in the WAN – thereby creating an all-IP end-to-end network – latency is minimised, with no data conversions needed,” says Clive Longbottom, key analyst at Quocirca. “This allows far more software-defined actions to be put in place. The software-defined, hardware-assisted, IP-only network will be the way to go, all driven by policy at the software layer.”
The addition of unified communications platforms to existing IP infrastructures presents another case-in-point. As companies roll out UC projects, they will encounter a number of pitfalls if they do not fully understand the composition and volume of traffic on their networks, warns Matt Goldberg, VP global strategic solutions at network monitoring specialist SevOne.
“Look at key components of the UC delivery path such as interface utilisation and QoS queues. Network and operations teams can leverage this gathered data through metrics, flows and logs, to provide a holistic view into behaviour on the network.”
These three methods of what Goldberg describes as “data ingestion” in turn provide insight into the infrastructure, and enable the network management team to further ensure that the infrastructure is able to handle the additional traffic load from the UC rollout.
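A toy illustration of how those three ingestion methods might be combined into one per-interface view is sketched below. The record shapes, interface names and thresholds are invented for illustration and imply nothing about SevOne's product:

```python
# Combining the three "data ingestion" sources -- metrics, flows and logs --
# into one holistic per-interface report. All data and thresholds invented.
metrics = {"gi0/1": {"utilisation_pct": 78}}                 # polled counters
flows   = {"gi0/1": {"voice_mbps": 12, "video_mbps": 31}}    # flow export
logs    = {"gi0/1": ["%QOS-4-QUEUE_DROP: output drops on EF queue"]}

def interface_report(iface: str) -> dict:
    """Merge the three sources into one view and flag interfaces at risk."""
    report = {
        "iface": iface,
        **metrics.get(iface, {}),
        **flows.get(iface, {}),
        "events": logs.get(iface, []),
    }
    # Illustrative rule: high utilisation or any logged QoS event is a warning.
    report["at_risk"] = (
        report.get("utilisation_pct", 0) > 70 or bool(report["events"])
    )
    return report

print(interface_report("gi0/1"))
```

Each source alone is ambiguous (high utilisation might be benign, a log event might be transient); correlated together they show whether an interface can absorb the extra UC load.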
“As UC is typically introduced in stages for global organisations, these methods also provide additional insight into how the infrastructure is performing as each location is brought online,” he says. “It also solves visibility issues, notifying the user of any issues in the service once it’s fully integrated.”
Policies and people
Alcatel-Lucent’s Tebbutt predicts that a policy-based approach to IP network management will become a fundamental practice in vertical sectors such as hospitals and other institutions where designated users must have assured network resources at their disposal pretty much at all times.
He adds that HR departments must also have a place in this debate. CIOs must liaise with personnel directors over the potentially prickly question of what users are and aren’t permitted to do. “There’s a human challenge here [for IT] because they must set policy and enforce it, but at the same time not be barriers and bottlenecks.”
The transition to all-IP could therefore bring a fairly profound change to the general relationship between those engineering the network and those who rely on it for their daily needs. The setting of users’ requirements could change from being a periodic assessment exercise to an evaluation that takes place on a weekly basis, where provisioning to different parts of the enterprise could be agreed on the basis of differing need, not as an open-ended resource.
While this should result in a more responsive and focused service for end-users, the new regimen may also call on them to develop a better understanding of how they ‘consume’ IT services. Ultimately, they could end up managing their own networks and taking on more responsibility for planning their future needs.