26 September 2014
Enterprise network managers have never had so much choice when it comes to fulfilling their role. IAN GRANT looks at the pros and cons of outsourcing IT networks and the Darwinian evolution of the marketplace.
The issues for and against outsourcing are well known: amortisable capex becomes pay-as-you-go opex; someone takes care of the growing complexity in return for you giving up some operational control; it is more efficient (for many tasks) and so cheaper; you can concentrate on your business rather than be distracted by a non-value-adding activity (unless you see IT as a competitive weapon); you can scale your capabilities smoothly rather than in expensive steps; you gain access to specialist services and skills that are too costly for you alone to justify; and so on.
The question is: whom can you trust? Are you better off with an in-house operation, where the insider threat looms large? Or with a good contract with a reputable outsourcing vendor? The answer depends on the application.
Whatever the choice, at some point every application must run on physical devices that consume space, time, energy and other resources. Owning the assets means direct expenses as well as hidden costs – skills, compliance, efficiency and opportunity – all of which need to be managed for cost, risk, security and financial return. But it all changes so fast that you can barely write off your investment before the next technology wave is upon you. It’s no wonder that some firms stick with what they’ve got until it breaks, while others hand the problem over and sharpen their contracting skills.
Simon Campbell-Whyte, executive director of the Data Centre Alliance, thinks there’s a better way: “This is not a game where you can afford not to play. You might be all right for a year or two, but as the technology moves on, you are going to lose your competitive position.”
Access to power and to carrier networks has become a limiting factor in the data centre (DC) business. Even so, Campbell-Whyte says the DC market as a whole has been growing at 20-30 per cent a year for the past six years, and he expects about one-third of it to be devoted to outsourced services by 2016.
A KPMG survey of 490 outsourcing deals worth more than £10bn, published late last year, confirmed this trend. Some 77 per cent of respondents said they intend to continue or increase their level of outsourcing. “Forty-five per cent claimed they will ‘certainly’ or ‘probably’ increase IT outsourcing over the next 12 months, a figure that has more than doubled from 19 per cent last year,” said the firm. It believes clients have two main motives: first, a desire to improve customer service; and second, a dearth of in-house skills to make it happen.
This fits in with Campbell-Whyte’s view. He sees a quickening evolution of the outsourcing sector, with data centre providers developing specialities.
It’s already happening. For example, Telehouse has chosen to host network operators; Rackspace is happy to share its machines with third parties; Virtus will share its DC floorspace and make a virtue of the number of carriers that connect to its sites; and Salesforce hosts applications in the cloud – all you need is internet access and a password.
Campbell-Whyte points out that everything turns on the application, which varies in its need for speed, resilience, security and specialist skills. This drives the choice of platform and its location. For example, a network for a high frequency trading house will look, feel and behave differently to one designed for a national point of sale network. As a result, vendors are honing their offers according to their perception of market needs and their own preferences and capabilities.
According to KPMG, the emerging picture is one of hybrid in-house and outsourced resources where fitness for purpose rules. This is good. In research commissioned by content distribution network Akamai, Forrester found that organisations combining hybrid infrastructure with a mix of cloud services that bring differentiated, complementary value are more likely to be highly satisfied.
This suggests that Darwinian specialisation to fit IT ecological niches is working. Instead of user organisations piling all their applications into a glass room and prioritising their job runs, they can now be selective and optimise where and when they run tasks.
Fundamentally, networks are just the means to the end, which is the conversion of data into meaningful content by an application. There is nothing per se to say that companies need to build, own and operate their own physical networks or DCs. Indeed, when it comes to telephony, email and even websites, very few do, and then usually only because of security, health and safety or regulatory/compliance issues. Even private networks use the public network infrastructure at some point, hence the growing angst over the public networks’ fitness for purpose.
The user’s point of view
For user-facing CIOs, the only thing of real interest is the end-to-end performance of the applications. The yardstick is no longer how much latency there is in the network, or jitter, or lost packets, but whether the user had a good experience. Those network performance measures may indicate why the user is happy or unhappy, but they are now diagnostic tools rather than themselves primary indicators of network quality. In this case, perception has become reality. Users simply don’t care about network issues, and arguably many resent being made to care. They just want it all to work.
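Those diagnostic measures are straightforward to derive from packet timings. The sketch below (the function names and sample figures are our own, for illustration only; the jitter figure is a simplified mean-deviation take on the RFC 3550 idea, not any vendor’s tool) shows how latency, jitter and packet loss might be computed once a user complains:

```python
# Illustrative network diagnostics: given one-way transit times (ms)
# for the packets that arrived, plus a count of packets sent, derive
# the three classic indicators mentioned above. Hypothetical data.

def mean_latency(transit_ms):
    """Average one-way delay in milliseconds."""
    return sum(transit_ms) / len(transit_ms)

def jitter(transit_ms):
    """Mean absolute variation between consecutive transit times
    (a simplified version of RFC 3550's interarrival jitter)."""
    diffs = [abs(b - a) for a, b in zip(transit_ms, transit_ms[1:])]
    return sum(diffs) / len(diffs) if diffs else 0.0

def loss_rate(sent, received):
    """Fraction of packets that never arrived."""
    return (sent - received) / sent

samples = [20.1, 20.4, 35.0, 20.2, 20.3]        # one spike at packet 3
print(mean_latency(samples))                     # ≈ 23.2 ms
print(jitter(samples))                           # ≈ 7.45 ms
print(loss_rate(sent=6, received=len(samples)))  # ≈ 0.167
```

The point of the article stands: none of these numbers matters to the user directly, but a spike in any of them is where the engineer starts looking when the experience degrades.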
Developments such as cloud computing, BYOD and mobile access mask the underlying networks. These retain their basic physical divisions – data centre, local and wide area networks – and all the issues that surround them. Once users become aware of them, those issues become grit in the smooth passage of their lives.
Even though “we are all IP now”, the differences between data centre, metro, aggregation and core networks at the physical level are material and, thanks to virtualisation technology, getting more complex. Smart CIOs will look at the supply chain that delivers their applications’ end-to-end performance, consider the risks and dependencies within it, and address them in SLAs. That could mean a different supplier at each of the seven layers of the ISO’s OSI model.
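The supply-chain point can be made concrete. In the sketch below (the layer names and availability figures are invented for illustration, not drawn from any real contract), end-to-end availability is, assuming independent failures, the product of each layer’s SLA commitment – which is why a stack of individually respectable “three nines” promises can still add up to more than a day of downtime a year:

```python
# Hypothetical supply chain for one application, listed bottom
# (physical infrastructure) to top (software), each layer with an
# invented SLA availability figure.
supply_chain = [
    ("physical infrastructure", 0.9995),
    ("wholesale carrier",       0.999),
    ("managed network operator",0.999),
    ("application provider",    0.999),
]

# If layer failures are independent, the chain is only up when every
# layer is up, so availabilities multiply.
end_to_end = 1.0
for _, availability in supply_chain:
    end_to_end *= availability

downtime_hours = (1 - end_to_end) * 365 * 24
print(f"end-to-end availability: {end_to_end:.4%}")   # ≈ 99.65%
print(f"implied annual downtime: {downtime_hours:.1f} hours")
```

This is the arithmetic behind negotiating an SLA per layer: the CIO’s end-to-end target constrains what each supplier in the chain must commit to.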
Take for example Cegedim Rx, one of the market-leading suppliers of healthcare software to the UK pharmacy market. It uses managed network operator Redcentric to provide the network that allows Cegedim Rx to add managed data backup and 3G failover services to its offer to pharmacies (see News, Jul/Aug issue). The software house’s sales and marketing director Clive Eckett said Redcentric was chosen in 2009 for its reliable connectivity to N3 – the NHS internal network. “While the early focus was all on taking our network to a new level, we were always aware that we could leverage additional Redcentric managed services to strengthen our own commercial proposition.”
Redcentric is one of 130 companies in the UK that hold Ofcom “code powers” – the right to dig up the streets to lay communications cable. But in reality, most of those 130 will buy backhaul from another carrier with capacity in the neighbourhood, and provide only the “last mile” connection. In Cegedim Rx’s case this means that the pharmacy owner sits at the top of a minimum five-layer supply chain that includes (in descending order): Cegedim Rx; Redcentric; a wholesale supplier/mobile network operator; and a physical infrastructure supplier.
Amazon considered such a multi-layered supply chain a risk when it launched its 4G-capable Kindle Fire HDX multimedia tablet. Knowing that the device would drive up demand for data, the company wanted a flawless one-click connection to get its customers buying goods and content as soon as they had powered up. It also wanted to streamline the process and save costs.
Working with Vodafone’s enterprise division, Amazon developed a solution that uses a single 4G SIM globally and ships pre-installed in the Fire HDX. Once activated, the Vodafone SIM becomes ‘local’ (subject to network availability). This means users can quickly sign up to local data packages and access top-up services and support. This hugely simplified Amazon’s logistics and supply chain, resulting in cost and management savings.
While not all firms are Amazon, the need to provide a good user experience is common to all online businesses.
To forestall such concerns among UK enterprises, hosted cloud provider C4L recently signed a five-year agreement with long-term partner Virtus. C4L has a national 100Mbps MPLS network that connects to more than 100 UK DCs and 300 internationally. It will deploy 36-cabinet pods at each of Virtus’ London1 and London2 data centres.
C4L says the two new dedicated pods will deliver up to 36 cabinets at a minimum of 108kW, with the flexibility to increase the density to more than 300kW per pod per year. London1 already has 20 Tier 1 and Tier 2 networks connected to it while London2, which opens this month, will have connections to four Tier 1 operators.
Both centres peer with all the major UK and European IP exchanges. C4L says this connectivity allows its customers to be up and live instantly, while allowing it to control their positioning in the data centre and cater for their growth. C4L CEO Simon Mewett says: “Agreements like this [with Virtus] originate from shared values and we can already see the positive effects it is having through the ability to roadmap not only our growth but facilitation of our customers’.”
Capability was also key to the decision by PVH Corporation, owner of lifestyle brands such as Calvin Klein, Tommy Hilfiger and Speedo, to outsource a new 400-site European network that connects retail outlets in 22 countries. The deal went to MDNX, which now trades under the Easynet brand following its takeover of Easynet late last year. The amalgamation created what Easynet claims is the largest independent networking and hosting integrator in Europe.
The Easynet European network will provide the backbone of the PVH retail network, harmonising the businesses’ IT systems, streamlining communication, aiding collaboration, and reducing management complexity across the organisation. It will also allow PVH to execute plans to invest in its supply chain and to roll out new software across the organisation.
Meanwhile, Capita IT Services, which specialises in the public sector, now offers a range of Microsoft productivity applications in a secure, UK-based environment via the government’s G-Cloud 5 framework.
The Capita Productivity Hub gives public sector users access to tools such as Outlook, Lync, SharePoint, Word, Excel and PowerPoint on premises, in a public or private cloud, or in a hybrid combination.
Capita IT Services executive director Peter Hands says: “Public sector organisations want the ability to use the cloud delivery model that suits them, but without the headache of managing it. Capita Productivity Hub offers applications via a private cloud infrastructure with all data stored in Capita’s UK data centres.”
A common platform
Socitm, the professional body for those involved in IT and digitally enabled services for the public sector, is exploring a more ambitious target. After rejecting an earlier proposal to have a single web portal for all local government applications as both impractical and possibly illegal, it is examining how sharing a common platform could reduce waste, duplication and inefficiency.
It points to previous shared projects that have been successful. One is Connect Digitally, the central government programme to help councils transform schools admissions and free school meals into digital services with high take-up and online payments. According to Socitm, more than 80 per cent of English schools have adopted the online admissions module, and time spent checking applications for free school meals has dropped from three hours to three minutes.
Another success is the Planning Portal, which was established by central government to support local planning authorities and their customers as they shift to paperless planning. According to Socitm, some 60 per cent of planning applications are now received and processed online.
It goes on to say that this is what can be achieved when stakeholders, including private sector systems suppliers, come together. By working closely together, they can identify and agree minimum features and functions for each service, agree standards for data formats and common features, and develop quality tools and products for adaptation by councils and schools.
Clearly there is scope for more such cooperation, and for the Public Service Network (PSN) that connects local government agencies to develop into a network of networks, possibly provisioned and run by a specialist operator. Whether it would be possible or even desirable for that to happen remains to be seen. In all likelihood, any development along those lines will start with the applications.