09 February 2018
Network virtualisation and software-defined networking (SDN) will become “inevitable” in 2018, according to FNT.
The Germany-headquartered global provider of integrated software solutions for IT management and data centre infrastructure management (DCIM) reckons any organisations that have not yet adopted these technologies will be in a rush to do so.
It says that as products have reached maturity and become easy to implement, there is no longer any reason for organisations to hold back, with continuing growth in network traffic the biggest driver for the move.
While network functions virtualisation (NFV) and SDN present one set of challenges for the data centre network manager (and more on those later), IT management specialist SolarWinds also warns that hybrid or ‘mixed’ infrastructure will become more of an issue. Destiny Bertucci, the company’s ‘head geek’, says: “The push to be within the cloud is increasing. In turn, the visibility IT professionals have into certain areas of infrastructure that are now hosted off-premises by cloud service providers has been decreasing. This presents a problem for both IT and data centre management because without a complete picture, security holes and misses will become more frequent.”
But for Mark Gaydos, CMO with DCIM provider Nlyte, the biggest network management challenges for data centres this year will revolve around greater transparency and overall efficiency. “The increasing number of assets being added to the data centre, from network servers to switches and even racks, is adding to the complexity of the setup. This is having an adverse effect on network performance.”
Gaydos says the only way to overcome such problems is by having a full view over each and every component within the estate, including networking components, so that performance can be monitored and improved upon. And that means a dedicated DCIM platform. He says: “Any DCIM solution worth its salt has the ability to manage and monitor data centre LANs, including network devices, ports, connections and cabling. It can also be supplemented with an integrated discovery solution to track real-time changes in a local area network.”
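The discovery capability Gaydos describes boils down to reconciling what the DCIM database says is on the LAN with what a scan actually finds. A minimal sketch of that reconciliation step is below; the device names and the simple set-based model are invented for illustration (a real discovery engine would gather its data via SNMP, LLDP or similar, not from hard-coded sets).

```python
# Illustrative sketch: compare the documented DCIM inventory against a
# live discovery scan and flag drift in both directions. Device names
# are hypothetical.

def reconcile(documented, discovered):
    """Return devices documented but not seen, and devices seen but
    not documented."""
    documented, discovered = set(documented), set(discovered)
    return {
        "missing": sorted(documented - discovered),    # documented, not on the network
        "unmanaged": sorted(discovered - documented),  # on the network, not documented
    }

inventory = {"sw-core-01", "sw-edge-02", "fw-01"}
scan_result = {"sw-core-01", "sw-edge-02", "sw-edge-03"}

drift = reconcile(inventory, scan_result)
print(drift)  # {'missing': ['fw-01'], 'unmanaged': ['sw-edge-03']}
```

Run on a schedule, a check like this is what keeps the “real-time changes” Gaydos mentions from silently diverging from the documented estate.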
While in the past there had been talk in some quarters of the industry about DCIM’s days being numbered (see ‘Keeping the data centre at the centre of data’ feature, Jan 2017 issue), nothing could be further from the truth for the commentators that we spoke to this time around.
For example, SolarWinds is in no doubt that DCIM solutions still cut it when it comes to managing and monitoring data centre LANs. “By definition this covers all tools – software to hardware – that help you to organise and manage your data centre,” says Bertucci.
“But the trick is to look outside of your comfort zone of tools and see what you are missing. You will need to add to your DCIM [platforms] so that you are covering your needs. This means you should evaluate your data centre and understand the health metrics you need to monitor to make sure it stays healthy.”
Following on from the issue of the ‘mixed’ infrastructure that she mentioned earlier, Bertucci’s advice is to invest in hybrid IT management software solutions that allow users to gain complete visibility across their entire data centre. But she goes on to point out that when data centre managers look at their monitoring screens and see green, they should not immediately assume that things are running smoothly. “Just because a device is up does not mean it is healthy. Ultimately, the user expectation and experience need to be addressed both on and off-premises.”
FNT is likely to agree here. Oliver Lindner, the company’s head of business line DCIM, says: “The same pressure that is on data centre operations to reduce cost, optimise capacity use, and increase agility is also true for network operations. To reach these targets, improved visibility across the stack and across silos is needed. Only DCIM products can provide that visibility. End-to-end visualisation (including all supporting devices such as passive components and cables) is critical for impact assessment and root cause analysis.”
According to Lindner, using DCIM tools will further benefit organisations by enabling them to continuously re-optimise all aspects of a data centre including power, cooling, networking resources and physical space.
He adds that improving utilisation can also help enterprises achieve greater energy efficiency, leading to reduced operating expenditure and delaying – or even avoiding – capital expenditure altogether.
The bottom line is that there is no alternative to DCIM. “Efficient and mature data centre operations require suitable management products to complement operations tools,” says Lindner. “You cannot operate an agile and secure network environment or a Tier 3 data centre by relying on spreadsheet documentation and planning.”
New network tech
What impact, if any, will newer technologies such as SDN and NFV have on managing data centre LANs?
Bertucci is in no doubt that training will be vital for everyone in IT for 2018. “The speed of learning these new technologies and their integration has an impact. Implementation of new features and loads, etc., on any type of infrastructure often causes IT organisations to play catch-up when it comes to learning. There are standard practices and tweaking that can produce unexpected outcomes and possible vulnerabilities if not fully vetted.”
For Lindner, the broader use of SDN/NFV in all areas inside and outside the data centre means organisations will inevitably require updated operational procedures.
“Not only do these new technologies need (and on a positive note, also enable) new processes, but in most cases legacy technology is still in use. Organisations need to prepare for at least a temporary ‘dual mode’ operation which in itself requires processes to adjust to the situation in order not to lose all the advantages offered by the new technologies.
“Most organisations will not opt to ‘rip and replace’ across the entire network to implement these new technologies. Rather, they will opt for a smooth transition, making the dual mode operation inevitable.”
He continues by saying that dealing with the challenges of dual mode operations, i.e. operating SDx/NFV and legacy equipment in parallel, will require management products that can deal with both types of technologies.
“Separating the planning and management for these technologies would lead to errors and inefficiencies in roll out and operation. Thus, a centralised database that documents all network assets and connections across all the data centre networks is a necessity. This database should be dynamically updated as change occurs and provide planning capabilities with ‘what-if’ scenarios, so all network teams are accessing the same accurate, up-to-date data.”
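The ‘what-if’ planning Lindner describes is essentially a dry-run check against the central asset database before a change is committed. The sketch below illustrates the idea with an invented, deliberately tiny data model (a real DCIM schema also covers cables, patch panels, power and space):

```python
# Illustrative 'what-if' check: before planning a patch, verify the
# target switch port exists in the central database and is free.
# Device and port names are hypothetical.

ports = {  # (device, port) -> connected asset, or None if free
    ("sw-core-01", "Gi1/0/1"): "srv-web-01",
    ("sw-core-01", "Gi1/0/2"): None,
}

def plan_connection(ports, device, port, asset):
    """Simulate a change; return (ok, message) without committing it."""
    key = (device, port)
    if key not in ports:
        return False, f"{device} {port}: port not documented"
    if ports[key] is not None:
        return False, f"{device} {port}: already connected to {ports[key]}"
    return True, f"{device} {port}: free, can connect {asset}"

print(plan_connection(ports, "sw-core-01", "Gi1/0/1", "srv-db-01")[0])  # False
print(plan_connection(ports, "sw-core-01", "Gi1/0/2", "srv-db-01")[0])  # True
```

Because every team plans against the same database, a change rejected here is caught before anyone touches a cable – which is the point of keeping planning and operations on one source of truth.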
Lindner also points out that SDx/NFV is moving ‘closer’ to IT and often running on regular, standardised IT equipment, ‘linking’ operations and management. “Both elements must be together to be effective and efficient. The minimum would be a joint database, so that the same data/information is available for both teams. Changes in IT related to network and network requirements will impact IT.”
A change in IT operations also has a big impact on network management. For instance, managing workloads in different locations by IT teams is becoming very agile, and those loads are also moving back and forth between sites. Lindner says this has a big impact on data flowing over the network and capacity usage between sites. “Some organisations have already introduced automatic methods to move workloads on their private cloud platforms. When it comes to planning and performing a network capacity expansion, there needs to be a general shift from technology thinking to business views (platform thinking) with a strong focus on agility and innovation first.”
Tracking inventory from dock to decomm
In recent months, there has been a spate of new network management product announcements from specialist vendors.
For example, last autumn, Nlyte Software and FNT both unveiled new management software within weeks of one another.
Nlyte was first with the launch of Nlyte 9.0 in October. The company says the newest version of its DCIM platform is completely redesigned from the ground up to streamline and personalise the user experience for more efficient operational processes in data centres and colo facilities.
At the same time, it is said to add robust support of REST APIs across the entire solution for improved interoperability. The firm says this enables developers to perform requests and receive responses via HTTP methods such as GET and POST, customising the way Nlyte 9.0 extracts information from homegrown management systems, BMS, ITSM systems, or other financial applications. Nlyte reckons this unique ability improves the capabilities to track equipment from ‘dock to decomm’ – i.e. from the moment it arrives on the dock to when it is scheduled to be decommissioned.
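‘Dock to decomm’ tracking can be pictured as a lifecycle state machine: each asset only ever moves through sanctioned stages, so the inventory stays auditable. The states and transitions below are invented for this sketch – they are not Nlyte’s actual model – but they show the principle:

```python
# Illustrative asset lifecycle: hypothetical stages from arrival on
# the dock to decommissioning, with only sanctioned transitions.

TRANSITIONS = {
    "on-dock":        {"racked"},
    "racked":         {"live", "decommissioned"},
    "live":           {"decommissioned"},
    "decommissioned": set(),
}

def advance(state, new_state):
    """Move an asset to new_state, or raise if the step is not allowed."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "on-dock"
for step in ("racked", "live", "decommissioned"):
    state = advance(state, step)
print(state)  # decommissioned
```

In a REST-integrated setup, each `advance` would correspond to a POST updating the asset record, so external systems (ITSM, finance) always see the same lifecycle stage.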
As well as increasing scalability, the new DCIM solution can also be operated on any browsers or mobile devices that support HTML 5. Some of the other features that are said to be unique to Nlyte 9.0 include multi-level historical breadcrumb navigation. Nlyte says this allows for easy views of recently visited pages with “deep recall” navigation displaying the user’s last data changes in an overlay page that can be saved. There’s also pervasive contextual search across the solution, enabling users to easily locate relevant data in any given screen or process, plus easier integration into other systems and applications.
Towards the end of 2017, FNT announced the latest release of ServicePlanet 4. As a centralised portfolio and service management database, the vendor said its software improves the efficiency and standardisation of business services throughout the entire lifecycle. It said the latest version provides complete transparency into enterprise service management, enabling organisations to design, manage, and monitor products and services.
New features include automated, detailed service history. FNT says this improves change handling processes as related service tree changes and business processes can be taken care of via a defined method. Other features include the monitoring of administrative operations, automated status models, HTML front-end views, independent work plan scopes, and flexible attribution functions. The new software also provides automated documentation of all available products and the configured services for greater clarity into the customer-supplier relationship.
According to FNT, ServicePlanet 4 increases the speed and flexibility with which enterprises can provide their services. By focusing on service lifecycle management and change handling, the company said it had laid the foundation for future development of its business scope service management. According to the firm, the provisioning of standardised products makes it possible for organisations to consistently deliver high-quality services while reducing costs and IT complexity.
Meanwhile, Paessler has introduced network monitoring-as-a-service that combines its PRTG Network Monitor with what it describes as the “flexibility, economy and security” of the cloud.
Using feeds from more than 200 pre-configured sensors, Paessler says PRTG’s “highly customisable” dashboards reveal precise information, from real-time intelligence on overall network performance and health, to granular details such as the temperature and capacity levels of individual servers. The platform also integrates with custom sensors, including those used for IoT-connected devices, via a “straightforward” API.
With PRTG in the cloud, the firm says netadmins gain the inherent resiliency, speed and security of Amazon’s Elastic Compute Cloud (EC2) platform. No monitoring server or licence is required – users simply add PRTG’s Remote Probe into their network and within 60 to 90 seconds it’s claimed they are ready to see what’s happening across their entire IT infrastructure in real-time. Once a subscription is activated, Paessler says it handles everything needed to run PRTG, including updates and regular backups of the client’s unique PRTG configuration and historical data. It adds that the service is easily scalable and flexible – in the event of new locations being added, such as through an acquisition, additional IT infrastructure can also be immediately monitored.
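Feeding a custom metric – say, an IoT temperature probe – into a push-style sensor typically means building a small structured payload and POSTing it to the monitoring platform. The sketch below approximates the shape of PRTG’s custom-sensor JSON (a `prtg` object containing `result` channels); consult Paessler’s own API documentation for the authoritative schema, as the field names here are a best-effort approximation:

```python
import json

# Build a push-style sensor payload for a custom metric. The schema
# approximates PRTG's custom-sensor JSON; verify field names against
# the vendor's API documentation before relying on them.

def build_sensor_payload(channel, value, unit="Custom"):
    return json.dumps({
        "prtg": {
            "result": [{"channel": channel, "value": value, "unit": unit}],
            "text": f"{channel}: {value}",
        }
    })

payload = build_sensor_payload("Rack A3 inlet temp", 24.5)
print(payload)
# In practice this JSON would be POSTed to the probe's HTTP push URL.
```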
Earlier this year, Digitronic Software launched new DCIM software in an effort to help data centre operators identify potential savings, optimise performance, and minimise operational risk by safeguarding reliability.
Digitronic Software is a joint venture between cooling specialist Stulz and software development and control systems firm, Digitronic Automationsanlagen. It says CyberHub ECO.DC enables 3D visualisation to assist with planning and optimisation of room layout, as well as mapping of temperature profiles. Other critical features include alarm management, with integrated escalation management, and individual status reports.
The company says the software has an easy-to-use operator interface that can be used to collect a wide range of data from air conditioning systems, temperature and pressure sensors, UPS systems and PDUs. It goes on to claim that rapid access to crucial information enables data centre operators to be responsive to events and avert potential problems before they become an issue, thereby helping to avoid risk of outages.
The software is said to offer built-in compatibility with a range of common protocols in the data centre sector, as well as BMS such as Modbus, MBus and SNMP. The firm adds that existing systems can be easily linked via an interface.
CyberHub ECO.DC is available in a fully autonomous local server version that is VM compatible or as a globally available SaaS. Digitronic says the latter is hosted in Germany and therefore conforms to strict data protection regulations. Communication between the base server and the customer interface is encrypted for additional protection, and the overall DCIM solution is DIN 50001 certified.
Last year, MPL Technology claimed it had come up with a unique new platform to address many of the challenges that data centre owners and operators have been experiencing in the DCIM area.
The firm, which bills itself as a global specialist in problem-solving and innovation in mission-critical environments, said N-GEN is the culmination of 10 years of data centre operational, consultancy and engineering experience spanning a number of multinational blue chip organisations. “We have found that our customers appreciate being able to grow at their own rate and use the relevant DCIM modules only when they need them,” stated MPL. “This means that their DCIM implementation path directly reflects their business needs step by step, and this is the best way that we can provide value to their organisation.”
It said that because every data centre requirement is different, N-GEN can be customised to suit. The platform integrates third party software, legacy systems, and legacy data centres for a comprehensive view of the estate. MPL claims its expertise in developing hardware and software alongside the relevant services means it can provide an all-round service. The firm reckons this combination allows comprehensive, high-quality problem-solving capabilities in the IT and mission-critical infrastructure arena.
MPL goes on to boast that as a “pioneer” in the design, development and manufacture of PDUs, it can deliver the “most expansive” range of solutions to cater for a wide range of power management needs. The company states that its line-up of products serves both facilities and IT and aligns them in the dashboard and reporting, resulting in “smarter business decisions with no gaps in management reporting”.
What to avoid
With various platforms available in the market, what are the pitfalls to avoid when looking for network monitoring and management solutions? SolarWinds’ Bertucci warns against choosing a “one sided solution” as this gives you a tool that only provides insight in one area.
“It may do it very well for one particular vendor, but it is a great solution for this niche only. You are then left with having to piece together other one-off solutions for areas where you have a lot of metrics, but ultimately no way to correlate them all to use in your favour.
“For me the better option is a multi-vendor solution. This may not go into the exact detail, but it does give you the metrics needed for a healthy data centre. It is best to have reportable and clearly stated tools that can give you true baselines and represent your infrastructure as a whole.”
FNT is likely to support this view: it says organisations adopting new tools and technologies should avoid any lock-in situation – both vendor lock-in through proprietary products and internal lock-in, where neighbouring departments that should be included in planning and operations are overlooked. Lindner adds that solutions need to provide well-documented interfaces (preferably REST APIs, as its new software does) to enable integrations and end-to-end workflow automation.
Nlyte’s Gaydos says the biggest pitfall to avoid first and foremost is poor planning. “An organisation without a clear goal will fail. Even with the best solutions in mind and extensive training, if a proper use strategy has not been put in place, organisations will struggle to integrate the new solutions and not benefit from the improvements.”
He says a further pitfall comes following installation, in the shape of monitoring. “Without the correct data centre infrastructure management software, organisations will not be able to build on the new systems and see where further network improvements can be made.”
What about security – how can such solutions deal with the ever present threats of cyber attacks and data breaches?
“Baselines within monitoring tools are a fantastic way to back a security plan,” says Bertucci. “If you have a baseline, you are able to see anomalies that would have gone under the radar. This allows you to quickly address the issue and then set up thresholds for future events.
“Monitoring tools also provide you with feedback from patching or upgrading around vulnerabilities. You have a ‘before’ baseline, one during, and one after, which allows you to showcase if the vulnerability ‘fix’ degraded any metric you are monitoring.”
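The baseline idea Bertucci describes can be sketched very simply: learn the normal range of a metric from history, then flag readings that fall outside the mean plus or minus a few standard deviations. Real monitoring tools use far more robust baselines (seasonality, percentiles, learned profiles), but the principle is the same; the sample data here is invented:

```python
import statistics

# Minimal baseline sketch: normal range = mean +/- k standard
# deviations of historical samples; anything outside it is an anomaly.

def build_baseline(history, k=3.0):
    mean = statistics.mean(history)
    sd = statistics.pstdev(history)
    return mean - k * sd, mean + k * sd

def is_anomaly(value, baseline):
    low, high = baseline
    return not (low <= value <= high)

# e.g. interface throughput samples in Mbit/s
history = [98, 102, 101, 99, 100, 103, 97, 100]
baseline = build_baseline(history)

print(is_anomaly(101, baseline))  # False: within normal range
print(is_anomaly(250, baseline))  # True: worth investigating
```

Comparing the thresholds computed before and after a patch is exactly the before/during/after comparison Bertucci outlines.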
Lindner also points out that tools need to be integrated and that teams need to cooperate.
“It is very difficult to identify a breach at a specific point as most attacks today use complex mechanisms. Simply monitoring a single firewall will not work. Next generation products also support ‘cloud based’ analysis allowing equipment to learn from incidents happening in other sites.
“With all the focus on technology and tools, we should not forget people. Enabling and training staff for the new technologies and making them aware of the need for closer cooperation between the teams will be essential for a successful implementation. Also, a new skillset and a new way of thinking is needed for 21st century agility.”
What’s clear in all of this is that network monitoring and management is vital for data centre operators, and that DCIM is the tool of choice for delivering it.
As Gaydos concludes: “DCIM is more than just a ‘nice to have.’ It can reduce the cost of running a data centre, provide better visibility over the entire estate, and is often the first line of defence against downtime. With more services now relying on the smooth running of the data centre, DCIM has become the ‘must have’ in the fight for market supremacy.”