By Michael Ruhe, Product Manager at Mitsubishi UPSD
We’ve all seen it happening: the number of connected devices is growing at an ever-increasing rate. In 2021, the number of connected devices was estimated to be 12.3 billion and is expected to more than double to 27 billion by 2025 (Satyajit, 2021).
Each of these devices is consuming or transmitting data in some way, and with that come ever-changing user expectations and data usage patterns. This could be someone at home streaming a new TV show or a manufacturing plant adding IIoT (Industrial Internet of Things) devices to monitor and optimize its operations.
So, how are companies addressing this intensified data usage? Enter edge data centers.
Edge data centers are so named because they live on the periphery, or “edge,” of a larger network, away from traditional hubs or interconnects. The objective of placing these data centers at the “edge” is to:
- Reduce the latency that stems from the transmission distance when communicating with servers in distant data centers (a quick back-of-the-envelope sketch follows this list)
- Move the processing of time-sensitive data as close to the end user as possible
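To make the latency point concrete, consider propagation delay alone: assuming signals travel through fiber at roughly two-thirds the speed of light in a vacuum, round-trip delay grows linearly with distance. The short Python sketch below uses purely hypothetical distances to show the scale involved; real-world latency would be higher still once routing, switching, and processing are added.

```python
# Back-of-the-envelope estimate of round-trip propagation delay over fiber.
# Assumption: signals travel through fiber at ~2/3 the speed of light in a
# vacuum; real networks add routing, switching, and queuing delays on top.

SPEED_OF_LIGHT_KM_PER_S = 299_792  # speed of light in a vacuum
FIBER_SPEED_KM_PER_S = SPEED_OF_LIGHT_KM_PER_S * 2 / 3

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

# Hypothetical distances for illustration only: an on-site edge rack, a
# regional edge facility, and a distant centralized data center.
for label, km in [("edge rack", 10), ("regional edge", 500), ("distant DC", 3000)]:
    print(f"{label:>13}: {round_trip_ms(km):6.2f} ms round trip")
```

Propagation delay for a data center a few thousand kilometers away runs into tens of milliseconds per round trip before any processing even begins; moving the server within a few kilometers of the end user shrinks that term to a fraction of a millisecond, which is precisely the motivation behind the edge.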
Because of the broad scope of what the “edge” is and the variety of needs being met, edge data centers can vary drastically in size and implementation. They can be anything from a single server rack in a manufacturing plant all the way up to a multi-megawatt colocation facility in an underserved city.
One approach to deploying an edge data center is the use of containerized or prefabricated data centers: “data centers in a box,” if you will. These solutions take all the familiar data center equipment, such as servers, racks, uninterruptible power supplies, and switchgear, and place it in a self-contained, environmentally controlled enclosure.
A containerized edge data center can be a great fit for small- to mid-size applications and remote locations where building a structure to house equipment can be difficult.
Over the course of this three-part series, we will examine some of the advantages of, and considerations for, deploying a containerized data center solution to address the needs of the edge.
A containerized approach provides significant benefits with regard to speed of deployment and time to market. Construction of the module, or container, occurs off site, usually at a partner integrator, in parallel with any other construction work that needs to be done at the site. It can even eliminate the need for additional structures to be built.
Standardizing a container design also makes it possible to quickly replicate the same build at multiple sites, reducing the time required to plan and design each location. If an operator forecasts its needs, the containers can even be pre-built by the integrator to maximize deployment speed.
The time required for installation and integration can also be reduced. Because the module or container is pre-built off site, most of the testing and verification of the included systems can be performed immediately after assembly, and any issues can be resolved more easily at the integrator’s facility. The module can then arrive on site ready to be energized and quickly integrated with any other equipment.
While these are just a few of the advantages of taking a containerized approach, there are many more that make it an attractive option for edge data center deployments.
In parts two and three of this series, we’ll discuss some of the considerations for selecting the equipment that will be installed in these containerized data centers – a piece of the puzzle that is just as important as the build design.
In the meantime, be sure to check out our Data Movers Podcast with Mitsubishi UPSD’s Jim Hughes as he talks Green Technology, The Age of Resignation, Edge Data and more!