23 March 2026
ZEDEDA, provider of edge intelligence, and Submer, the end-to-end AI infrastructure company, have entered a strategic partnership to deliver rapidly manufacturable, modular, liquid-cooled edge AI infrastructure for high-density GPU inference in locations where traditional data centres are unavailable or impractical.
The joint solution combines Submer's full-stack AI infrastructure platform – spanning design, liquid-cooled compute infrastructure and deployment, and supporting ultra-high-density racks exceeding 100kW – with ZEDEDA's edge intelligence software platform, enabling customers to create, secure and operate edge AI anywhere in the world, at any scale.
As AI workloads increasingly move from centralised cloud infrastructure to industrial and operational environments, organisations require high-density compute infrastructure that can be rapidly deployed outside traditional data centre facilities. Enterprises, service providers, and nations can now deploy fully integrated and validated high-density GPU inference infrastructure anywhere intelligence is needed – on factory floors, at energy sites, across telco aggregation points and in sovereign environments – without the constraints, cost, or lead times of traditional AI data centres.
Said Ouissal, CEO and founder of ZEDEDA, said: "As intelligence moves from the cloud into the physical world, the ability to run AI anywhere – in a remote factory, an offshore platform, or telecommunications networks – is a fundamental requirement. The world's most critical operations generate enormous volumes of data far from any data centre, and until now, the infrastructure to act on that data intelligently simply couldn't follow. Our collaboration with Submer makes that possible now.
"ZEDEDA's Edge Intelligence Platform ensures high-performance AI workloads at the edge are managed, secure, and scalable, and Submer's liquid cooling technology enables the high-density compute those workloads demand, even in the harshest global environments. Together, we are unlocking AI for the industries that need it most."
The companies plan to offer three modular form factors initially.
18 March 2026
TD SYNNEX has launched Cloud Insights – a tool designed to help Microsoft CSP partners in the UK turn detailed data on customer licences and usage into strategic conversations, unlock greater value with additional services and solutions, and drive profitable growth.
Built for partners, Cloud Insights enables organisations to deliver greater financial control and accountability across the customer lifecycle through data-driven, executive-level insights. Designed to help CSP partners differentiate their value proposition, Cloud Insights provides comprehensive visibility across Microsoft 365 and Azure estates, supporting stronger renewals through informed forecasting, budget governance, and spend predictability.
18 March 2026
Mythic has chosen memBrain neuromorphic hardware intellectual property (IP) from Microchip Technology's Silicon Storage Technology (SST) subsidiary for its next-generation edge-to-enterprise Analogue Processing Units (APUs).
Mythic will utilise SST's SuperFlash embedded non-volatile memory (eNVM) bitcells to deliver high levels of analogue compute-in-memory (aCIM) performance per watt. The partnership enables Mythic to achieve 120 TOPS/watt inference processing for power-efficient AI acceleration at the edge and in the data centre: Mythic's APUs are targeted to be up to 100 times more energy-efficient than conventional digital Graphics Processing Units (GPUs).
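As a rough illustration of what an efficiency figure like 120 TOPS/watt implies, the following sketch converts it to energy per operation (the conversion is standard arithmetic; only the 120 TOPS/watt figure comes from the announcement, and the function name is illustrative, not a vendor API):

```python
# Energy per operation implied by an efficiency figure quoted in TOPS/watt.
# 1 TOPS = 1e12 operations per second, so TOPS/watt = 1e12 operations per joule.
def energy_per_op_femtojoules(tops_per_watt: float) -> float:
    ops_per_joule = tops_per_watt * 1e12
    return 1e15 / ops_per_joule  # convert joules per op to femtojoules per op

# At 120 TOPS/watt, each operation costs roughly 8.33 femtojoules.
print(round(energy_per_op_femtojoules(120.0), 2))
```

By the same arithmetic, a processor that is 100 times more energy-efficient than a GPU delivering 1.2 TOPS/watt would sit at the quoted 120 TOPS/watt.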
18 March 2026
Everpure, the storage and data management solutions provider, has announced Evergreen//One for FlashBlade//EXA and the upcoming beta of Everpure Data Stream to help organisations reduce cost and complexity barriers that can stall enterprise AI projects.
Evergreen//One (EG1) for AI now extends across FlashBlade//EXA, providing the massive performance, scalability and throughput required for large-scale training and inference. Complementing this, the Everpure Data Stream beta – launching later in 2026 – accelerates time-to-result by eliminating the friction of manual data movement with a direct, automated pipeline from data ingestion to inference.