Serving new services

27 November 2015

Enterprise applications are evolving beyond simple client-server interactions to include far more compute-intensive services. According to KEMP, this greater network complexity is driving growth in the server market.

When the corporate networks of the 1990s ran slow, network managers had recourse to a tried-and-trusted remedy: ‘throw a server’ at the problem. After connecting up a new server (or two) and migrating a bunch of users, all-round user experience would improve overnight (or more likely over the weekend).

Such procurement made fiscal sense at a time when the number of network users was increasing rapidly. Routinely buying in servers also provided a way to keep QoS ticking over without incurring the higher cost of other new kit, and with the advent of blade servers, processing power could be boosted without concomitant rack-space expansion.

But even back then, the techies knew that a lack of servers was usually not the root cause of network QoS issues – indeed, adding physical machines could compound the problem. Yet even 20 years later, servers – be they rack-mounted, blade, or density-optimised variants – are still often the focus of attempts to derive greater value from the IT estate. (Of course, this irks server vendors who point out that today’s models are intrinsically designed to be much more efficient in terms of capex and opex than other network devices.)

When their procurement proposals are being scrutinised, IT managers are challenged over whether they should be sweating existing assets. Sometimes approval for new expenditure might be contingent on finding quantifiable value gains from what is already up and running, especially in regard to the ever-accumulating server ranks.

The problem in part is that servers are still too often seen by the board as just ‘big computers’ which should be made to deliver more than they are already doing. The thing is, often they can.

This is not to say that the purchasing of new servers has entirely halted while organisations seek to work their assets into the ground. According to IDC’s Worldwide Quarterly Server Tracker, vendor revenue in the global server market increased by more than six per cent year-on-year in Q2/2015 to $13.5bn. Server shipments worldwide totalled 2.29 million units during that period – an increase of 3.2 per cent compared with Q2/2014.

These may not seem like soaring growth figures. But in a decades-old market where the latest multi-core CPUs can give replacement units double the compute power of their predecessors, and where refresh cycles can now last for five years or more, this counts as relatively healthy growth.

Driving this growth are the many and varied usages that servers support. For instance, server estate build-out is a necessity for data centres housing cloud and other third-party services, while telecom firms need servers both to manage their own infrastructures and to support CRM platforms as mobile comms expands into emerging global markets.

Furthermore, cyber security threats, among other factors, are causing industrial control systems to be upgraded to mainstream IT standards, and what used to be classed as ‘business technology’ (e.g. Microsoft Windows) is now being installed in factories and plants. Looking further ahead, it’s predicted that IoT growth will create a requirement for the modern equivalent of ‘front-end processors’ – scaled-down servers that process and filter data flowing in from thousands of remote connected devices, sensors and machines.
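As a purely illustrative sketch of that ‘front-end processor’ role – the sensor names, readings and threshold below are invented, not drawn from any product – a small Python routine might filter and summarise incoming device data before anything is passed upstream:

```python
# Hypothetical sketch (names and thresholds invented) of the 'front-end
# processor' idea: filter and aggregate raw readings from many sensors so
# that only meaningful data is forwarded to central servers.
from statistics import mean

ALERT_THRESHOLD = 75.0          # assumed limit for this illustration only

def filter_and_summarise(readings: dict[str, list[float]]) -> dict:
    """Return per-sensor averages plus any readings worth escalating."""
    summary, alerts = {}, []
    for sensor_id, values in readings.items():
        summary[sensor_id] = round(mean(values), 2)
        alerts.extend((sensor_id, v) for v in values if v > ALERT_THRESHOLD)
    return {"averages": summary, "alerts": alerts}

incoming = {
    "sensor-001": [61.2, 60.8, 62.0],
    "sensor-002": [70.1, 78.4, 71.3],   # one reading breaches the threshold
}
print(filter_and_summarise(incoming))
```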

“There are many drivers for server estate growth,” says Maurice McMullen, product manager at load balancing specialist KEMP Technologies. “Some of this is driven by greater application complexity, with applications growing beyond simple client-server interactions to include compute-intensive services, such as data mining.”

Next there’s the consolidation of applications that traditionally ran on dedicated appliances set apart from core compute platforms. Examples here include load balancers (of which more below), firewalls, and intrusion detection/prevention systems. McMullen predicts that demand for physical servers will grow as more network functions, such as data traffic switching, are delivered from virtualised x86 platforms rather than dedicated devices.

Stack attack

One place to start looking for potential server performance gains is an inventory analysis of how a given network has been put together over, say, the last five years. You then seek out the hardware/software mish-mashes that may have come about as VM estates have been created across infrastructures that lack continuity and cohesion.

Developments and re-developments come fast in the IT world, which means that even technologies less than three-to-five years old can count as ‘legacy’ infrastructure, according to Peter Barnes, enterprise solutions director at Dell UK. He believes that around 2010 there were many server estates that had still not fully embraced the virtualisation opportunity – an opportunity that offered network managers quick wins on the trade-offs between heat, space and power, as well as reductions in management overhead costs.

“Additionally, as organisations started to go on that virtualisation journey, many did so by taking their legacy vendors with them. This has presented a challenge – both then and today – as now we are seeing large server estates that are heavily virtualised, but with fragmented and expensive underlying infrastructure.”

The move of functions like switching onto servers is evidence of a broader trend in the design and deployment of enterprise ICT, where traditional hierarchical infrastructural models are being rearranged by incoming methodologies such as SDN. Wieland Alge, Barracuda Networks’ EMEA VP and GM, says: “Traditionally, IT was built from the bottom upward – infrastructure, network, transport, application, etc. – one layer on top of the other. Today we tend to build IT top-down, because the applications now set the technological expectation and we have realised that there is no point in basing a business around applications that the [underlying] IT infrastructure is unable to properly support.”

The trend he describes is ultimately reshuffling the network layers somewhat, with servers becoming more than just data processing engines and taking on functionality from other layers. Alge advises network managers to stay apprised of these developments, because they are likely to provide supporting evidence when it comes to putting the case for new value-added servers or server management tools.

However, the “top-down” effect could also be contributing to the pressure on IT directors to extract more from their server estates. As Barnes points out, users now insist on a service that’s always available, with no maintenance windows and certainly no outage time. “IT departments are being tasked to ‘do more’ with either the same or a reduced budget. These teams need to find new ways to make their budgets go further, while also remembering that user expectations are higher than ever before.”

ElasticHosts says servers should be configured in a smarter way – figures suggest that on average, 50 per cent of their typical capacity remains idle.

Max headroom?

Another contentious area of the debate surrounding maximising value from servers is utilisation margins. These are the load thresholds or ‘headroom’ made available to accommodate the highs and lows in demand that will occur throughout the working day (which, for servers in data centres hosting global services, is pretty much 24/7/365). 

This topic has long been argued in IT circles, both among users and vendors. Some say allowing servers to be utilised to their utmost at all times is another way to derive maximum value from the investment; others hold that utilisation is best kept within narrower margins, and that capacity which sits under-utilised for periods of time aids overall efficiency because it is better able to cope with demand peaks when they occur.

Richard Davies, CEO of cloud hosting/services provider ElasticHosts, thinks that smarter policies in the way server resources are configured point toward more value-yielding options for network managers. He says industry figures suggest that typically, servers may have 50 per cent idle capacity on average over a full, round-the-clock period. 

“This is because most approaches to provisioning servers are still based around fixed-size server ‘instances’ of VMs, provisioning capacity in large blocks, rather than the exact amount needed. This is wasteful for businesses and unfit to meet the future demands of enterprise computing. Modern servers need to be flexible, as the strain on applications and databases grows and shrinks throughout the day.” 
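To make the arithmetic behind that claim concrete – using an invented 24-hour demand curve and an assumed eight-vCPU instance size, not ElasticHosts’ own data – the Python sketch below compares capacity provisioned in fixed blocks sized for the daily peak against what is actually consumed:

```python
# Illustrative only: invented hourly demand and an assumed fixed instance
# size, showing how peak-sized, block-based provisioning leaves capacity idle.
import math

INSTANCE_SIZE_VCPUS = 8
hourly_demand = [12, 10, 9, 8, 9, 14, 22, 30, 38, 41, 40, 39,
                 37, 38, 40, 42, 39, 33, 27, 22, 18, 16, 14, 13]

# Provision whole instances for the daily peak and hold them round the clock.
instances = math.ceil(max(hourly_demand) / INSTANCE_SIZE_VCPUS)
provisioned_vcpu_hours = instances * INSTANCE_SIZE_VCPUS * len(hourly_demand)
used_vcpu_hours = sum(hourly_demand)

print(f"provisioned: {provisioned_vcpu_hours} vCPU-hours")
print(f"used:        {used_vcpu_hours} vCPU-hours")
print(f"idle share:  {1 - used_vcpu_hours / provisioned_vcpu_hours:.0%}")
```

On these made-up numbers the idle share comes out at roughly 47 per cent – in the same territory as the industry figure Davies cites.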

But Barnes says that before you even begin to look at opportunities to optimise server value delivery, you need to consider what kind of gains are desirable. 

“Efficiency gains and server performance optimisation are rather open-ended goals, and provide an overall objective rather than a breakdown of specific areas to improve. For example, if IT professionals bring SSD flash cache on board for compute, this increases an application’s performance without making a big investment. It also means customers can add more disks to a legacy storage array without having to spend more money on updating it.” 

However, he goes on to say that this could also involve using the latest technology which supports big memory footprints: “The industry at large now supports 64GB dual in-line memory modules, meaning that even more VMs [can be hosted] using fewer physical servers.”
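As a back-of-the-envelope illustration of that point – the VM count, per-VM memory allocation and DIMM slot count below are assumptions for the sake of the sketch, not Dell figures – larger DIMMs translate into fewer physical hosts for a memory-bound estate:

```python
# Rough, illustrative arithmetic (all figures assumed): how a bigger memory
# footprint per host reduces the physical server count for a VM estate.
import math

VM_COUNT = 400
MEM_PER_VM_GB = 8                 # assumed average VM memory allocation
DIMM_SLOTS_PER_HOST = 24          # assumed two-socket server

for dimm_gb in (16, 32, 64):
    host_mem = dimm_gb * DIMM_SLOTS_PER_HOST
    vms_per_host = host_mem // MEM_PER_VM_GB
    hosts_needed = math.ceil(VM_COUNT / vms_per_host)
    print(f"{dimm_gb}GB DIMMs -> {host_mem}GB/host, "
          f"{vms_per_host} VMs/host, {hosts_needed} hosts")
```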

An additional consideration for the ‘maximisation-by-utilisation’ camp is that continuously running a server at full, or near-full, capacity risks shortening the hardware’s life expectancy. So while more might be got out of a server in the short-to-medium term, it may have to be replaced or upgraded sooner than expected.

Of course, the liabilities here may depend more on a given brand’s reputation for reliability than on factors intrinsic to the technology. But it remains a subsidiary point to bear in mind when making an overall evaluation of the pros and cons of server value maximisation techniques. Furthermore, the liabilities may be heightened if a server is evolving toward a multi-function network device and ends up being prone to that old data communications bugbear – the SPOF (single point of failure).

They also serve...

Aside from these issues, deliberating trade-offs between loading and service longevity can also turn out to be a red herring, according to some server experts. They say the loading and efficiency of the full data centre infrastructure needs to be examined, not just the servers.

For instance, Barnes says the server world has become fairly efficient in terms of space, power and cost, so the bigger savings opportunities now lie in networking and storage. Typically, these areas have not seen the same degree of technological advancement, and more proprietary products are to be found there.

With SDN and software-defined storage leading to data centre convergence, things are changing fast. “Storage and networking functions are collapsing back into servers, and it is that change that has huge potential to free up space,” says Barnes.

More traffic and security management functions moving onto servers sounds like good news for network managers under the cosh to sweat the assets. It should enable more network functions to be consolidated away from pricey, dedicated devices, and onto more affordable and versatile server-based environments. That means network managers could make budgets stretch further by allowing more functionality to be loaded on to fewer new physical machines.

At the same time, as is often the case with consolidation-based cost savings, network managers will have to be mindful of inadvertently heading toward what Barracuda’s Alge calls “the re-introduction of complexity”. He warns that this “nightmare scenario” is a very real possibility because consolidation and expansion can easily result in an additional, unplanned complexity that, while initially bringing gains in cost-savings and performance, can be much more demanding to manage going forward.

Balancing the load

Definitions of the term ‘load balancing’ have changed, and are continuing to change in line with other innovative developments in network traffic management. 

Traditional load balancer solutions aim to optimise resource use, accelerate throughput, minimise response time, and avoid overloading any one network resource. This is achieved by actively distributing IT workloads across multiple computing resources, including servers, PCs, storage devices and network links, so that the burden is shared. Load balancing has provided a profitable product niche in recent years for vendors such as A10, Barracuda, Cisco, Citrix, F5, Jetnexus, KEMP and Radware.
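The core distribution idea is simple enough to sketch. The following is a toy round-robin example in Python with invented server names – not how any of the vendors above implement it; real products layer on health checks, weighting and session persistence:

```python
# Minimal sketch of the basic distribution idea behind a load balancer:
# round-robin rotation across a pool of backends (server names invented).
from itertools import cycle

backends = cycle(["srv-a", "srv-b", "srv-c"])

def pick_backend() -> str:
    """Hand the next request to the next server in the rotation."""
    return next(backends)

for request_id in range(6):
    print(f"request {request_id} -> {pick_backend()}")
```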

Load balancing was once a function in which servers played a greater role, but in more modern enterprise infrastructures it can now involve dedicated hardware and software, such as a multi-layer switch or a DNS server process.

According to Gary Newe, technical director at F5 Networks, load balancers have moved on from basic connection distribution to the class of device marketed by some vendors as application delivery controllers (ADCs).

Newe explains that these operate at a higher level on the OSI stack, and can take over some server functions: “There are two types of network-based load balancers: those at Layer 4 and those at Layer 7. The main difference is that those operating at Layer 7 are usually some sort of application proxy, while those at Layer 4 are typically not capable of understanding the higher-level application information.”

These Layer 7 devices can be physical or virtual, and can greatly assist in offloading server infrastructure by taking on typical server functions such as SSL termination, compression, optimisation and DDoS mitigation.
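To illustrate the distinction Newe draws – this is an assumed, simplified sketch rather than any vendor’s implementation, and the pool names and routing rules are invented – a Layer 4 decision can only use connection details, while a Layer 7 proxy can route on the parsed request:

```python
# Illustrative contrast (all names invented): Layer 4 balancing uses only the
# connection tuple; Layer 7 has parsed the HTTP request and can route on it.
import hashlib

POOL = ["web-1", "web-2", "web-3"]

def l4_pick(src_ip: str, src_port: int) -> str:
    """Layer 4: choose a backend from the connection tuple alone."""
    digest = hashlib.sha1(f"{src_ip}:{src_port}".encode()).hexdigest()
    return POOL[int(digest, 16) % len(POOL)]

def l7_pick(host: str, path: str) -> str:
    """Layer 7: route on application data such as host and path."""
    if path.startswith("/api/"):
        return "api-cluster"        # hypothetical application-specific pool
    if host == "static.example.com":
        return "cache-tier"         # e.g. hand static content to a cache
    return POOL[0]                  # default pool for everything else

print(l4_pick("192.0.2.10", 51514))
print(l7_pick("www.example.com", "/api/orders"))
print(l7_pick("static.example.com", "/img/logo.png"))
```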

Load balancing continues to change in line with developments in the application, server and network environments, and this creates some opportunities for server-based load balancing to enable network managers to do more performance fine-tuning.

Meanwhile, earlier debates around network load balancers versus application proxies versus ADCs have been rendered passé, according to KEMP’s McMullen. He says load balancers now need to be capable of delivering any application across any network topology as a physical or virtualised appliance. 

“The modern load balancer needs to support data centre environments that are highly automated, virtualised and, most importantly, dynamic,” he says. “Emerging technologies, such as SDN, can also be leveraged to provide deeper insight to the health of the complete application delivery stack. Modern load balancers must integrate with these new technologies.”

Newe agrees to a certain extent here and notes that the market is already seeing huge changes. But he adds: “We will see the rise in importance of ADCs in self-provisioning networks, SDN, and software-defined data centres, where [they] can provide key feedback on the current status of applications or the general well-being of the end-user experience.”

Initiatives to ratchet up server value delivery will, sooner or later, come up against an age-old inhibiting factor: the availability of time. Network managers may be minded to set aside parts of their busy schedules to identify and deploy ways to optimise server operations, but when new exigencies crop up, those tasks are prone to being relegated down the IT team’s job list, even though they could ultimately result in improved network operations.

New application delivery, troubleshooting, upgrades, responding to cyber security incidents – all of these will, alas, take priority over non-essential fine-tuning of existing assets. As Barracuda’s Alge concludes: “Unfortunately these days, network staff often simply do not always have the time to focus on server maximisation – even when they know that there are real gains to be had.”