What is an Edge Data Center?
Data is changing how modern companies operate, powering software applications that support key processes and deliver crucial decision-making insights.
Much of the data processing that organizations rely on still occurs in the cloud. But the physical cloud data center may be located thousands of miles away.
Sending data back and forth to a central server can take seconds too long. For industries that require real-time data processing to make decisions, even a slight delay can prove costly.
As more companies find themselves in need of instant computing power, relying on traditional cloud networks isn’t always practical.
This is where an edge data center comes in.
Here we’ll take an in-depth look at what edge computing is, what edge infrastructure offers, and how you can use Nlyte’s DCIM software to manage your edge data center.
What Is Edge Computing?
Edge computing involves placing computing resources closer to the “edge” of where the data originates, whether it’s a factory floor or financial institution.
Processing data locally near the data source — on the device itself or an edge server — greatly reduces latency as it eliminates the need to send data back and forth to distant data centers.
As an example, factories may deploy remote cameras to monitor machinery. Processing data for hundreds or even thousands of these devices can lead to higher latency and drive up bandwidth costs.
Placing computing resources at the edge of a network not only helps companies process data faster, but also delivers significant cost savings by reducing bandwidth use.
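To make the bandwidth savings concrete, here is a minimal sketch (with entirely hypothetical numbers) of an edge node that filters camera frames locally and uploads only the frames that cross an alert threshold, instead of streaming everything to a distant cloud:

```python
# Hypothetical illustration: an edge node filters camera frames locally,
# uploading only frames flagged as anomalous instead of the full stream.

def frames_to_upload(frames, threshold):
    """Return only the frames whose reading exceeds the alert threshold."""
    return [f for f in frames if f["reading"] > threshold]

# Simulated day of readings from one camera: one frame per minute,
# with an anomalous reading once per hour.
frames = [{"id": i, "reading": 0.9 if i % 60 == 0 else 0.2} for i in range(1440)]

uploaded = frames_to_upload(frames, threshold=0.5)
savings = 1 - len(uploaded) / len(frames)
print(f"uploaded {len(uploaded)} of {len(frames)} frames "
      f"({savings:.0%} bandwidth saved)")
```

In this toy scenario only 24 of 1,440 frames leave the site; the exact ratio depends entirely on the workload, but the principle of filtering at the source is what drives the cost savings.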
What Is Driving Edge Computing?
The exponential growth of connected devices has created a demand for data processing and analysis that traditional cloud networks can’t handle.
Examples include the software applications that industries like manufacturing, oil and gas, and telecommunications use to automate key processes. Others include the hardware that the retail and wholesale sectors use to process transactions and update inventory in real-time.
Then there are millions of IoT (Internet of Things) connection points — facility sensors, health monitoring devices, home appliances, etc.
Emerging use cases like fog computing, software-defined networking, predictive maintenance, and blockchain are also driving demand for edge data center infrastructure.
To meet the growing demand for real-time data processing and to reduce latency caused by network congestion, more organizations are turning to edge computing. Gartner estimates that 10% of enterprise-generated data is processed outside the cloud in edge servers today and predicts this figure will reach 75% by 2025.
What Is an Edge Data Center?
An edge data center is a small facility located on the edge of a network. It delivers computing resources to connected devices over much shorter distances, allowing companies to reduce latency and improve user experiences.
Edge data centers typically process more time-sensitive data, while the larger data centers they connect to may be used for long-term data storage and heavier workloads.
Edge data center use cases include:
- IoT devices: Edge computing provides more efficient user interactions for IoT devices like health trackers, home appliances, and security systems.
- Autonomous vehicles: Self-driving vehicles need to collect and analyze large volumes of data. Manufacturers can deploy edge data centers to process this information.
- Agriculture: Farmers can deploy various sensors to monitor weather conditions and control factors like water use in real time to optimize output.
- Healthcare: Surgeons conducting remote surgery require extremely low latency, which an edge data center can provide.
- Manufacturing: Factories can turn to edge solutions to monitor equipment and improve product quality.
These are just a handful of use cases for edge computing as the applications are practically endless.
What Does an Edge Infrastructure Deliver?
An edge architecture delivers several benefits.
Bringing the “client” and “server” computers geographically closer reduces latency significantly.
The latency and bandwidth costs associated with public cloud platforms can create performance problems for applications that use machine learning and artificial intelligence. A few hundred milliseconds of latency for a delivery drone or smart car can result in a catastrophic outcome.
5G’s millimeter-wave and microwave signals don’t travel far, so edge data centers are needed to provide sufficient coverage.
Distributing workloads across multiple small computing centers improves fault tolerance with an N+X redundancy factor, increasing failover options economically.
Because an edge data center has a smaller footprint, securing adequate space and power becomes easier. They’re also less complex to build, and can even be shipped pre-assembled as modular “rolling” data centers from vendors such as IBM, Dell, and HPE.
Edge data centers allow organizations to scale up based on demand, allowing for a more flexible and agile approach to infrastructure.
Security in edge computing has many facets, but one clear benefit is that edge data centers reduce the amount of sensitive data transmitted over networks.
Data can be anonymized closer to the source, thus protecting personally identifiable information and limiting the amount of data stored in any one location.
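A minimal sketch of this idea, assuming hypothetical record fields, is to hash PII fields at the edge before a record ever leaves the site. (In production you would use a salted or keyed hash, since plain hashes of low-entropy data are vulnerable to dictionary attacks.)

```python
import hashlib

def anonymize(record, pii_fields=("name", "email")):
    """Replace PII fields with a one-way hash before the record leaves the edge site."""
    out = dict(record)
    for field in pii_fields:
        if field in out:
            # NOTE: a plain SHA-256 is shown for illustration only; use a
            # salted or keyed hash (e.g. HMAC) against dictionary attacks.
            out[field] = hashlib.sha256(out[field].encode()).hexdigest()[:16]
    return out

# Hypothetical health-tracker reading: the measurement is kept,
# the identifying fields are anonymized at the edge.
reading = {"name": "Jane Doe", "email": "jane@example.com", "heart_rate": 72}
safe = anonymize(reading)
print(safe)
```

Only the anonymized record is forwarded upstream, so a breach of the central store exposes no raw identifiers.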
Why Is Edge Data Center Infrastructure Important?
Edge data centers are now processing time-sensitive data critical to logistics and financial transactions. They’re also processing sensitive business and individual (consumer) data.
So, you don’t want to run your critical infrastructure from a broom closet, and you don’t want the janitor managing your servers. You want to mirror the infrastructure of your core data center.
Gartner made this observation: 45% of all IoT data today is processed at the edge.
Edge computing requires the same components as a centralized data center, but on a much smaller scale. These include:
- Compute resources
- Storage devices
- Energy monitoring
- Network equipment
- Cooling systems
- Power supplies
Everything needs to be monitored, managed, maintained, and secured. One key difference at an edge data center is the lack of on-site personnel to handle the upkeep.
What Are Challenges of Edge Data Centers?
- A lack of trained staff to manage the edge sites
- Keeping IT systems and facilities in sync on the disposition of assets
- Sending someone hundreds or thousands of miles away to turn on a switch
- Maintaining the same levels of security as a central data center
What Is Edge Data Center Management?
Managing an edge data center involves the following:
Mapping the Edge
First, take inventory with an automated discovery tool, then use it to regularly check what’s sitting on the network. Those regular discovery scans need to be validated against your CMDB and DCIM asset database.
Once you have an accurate account of the physical and virtual systems, they need to be mapped from each workload to its network connections, all the way through the power chain.
When shared with other groups, this mapping lets you understand the effects of planned and unplanned disruptions down to the level of an individual workload. It also aids service management efficiency and alerts security to missing or unauthorized devices and applications on the network.
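The validation step above can be sketched as a simple reconciliation between a discovery scan and the CMDB (hypothetical asset records, keyed by serial number):

```python
def reconcile(discovered, cmdb):
    """Compare a discovery scan against CMDB records by asset serial number."""
    scanned = {a["serial"] for a in discovered}
    recorded = {a["serial"] for a in cmdb}
    return {
        "unauthorized": sorted(scanned - recorded),  # on the network, not in the CMDB
        "missing": sorted(recorded - scanned),       # in the CMDB, not responding
    }

# Hypothetical scan results vs. CMDB records.
discovered = [{"serial": "SRV-001"}, {"serial": "SRV-003"}]
cmdb = [{"serial": "SRV-001"}, {"serial": "SRV-002"}]
print(reconcile(discovered, cmdb))
# {'unauthorized': ['SRV-003'], 'missing': ['SRV-002']}
```

Either discrepancy, an unknown device on the network or a recorded device that has gone silent, is exactly the kind of finding that should trigger a security or service-desk workflow.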
Keeping IT Operations and Facility Team Synced
By extending your DCIM solution out to edge computing sites, you’ll understand what’s going on with everything at that site. You’ll have visibility to chillers, power supplies, servers, network connections, sensors, and other IoT devices down to the application workloads.
Visibility to power, cooling, and human access details, down to the rack and server level, lets you fill in the blanks for facilities and their building management systems, as well as IT operations and configuration management database (CMDB) and service desk functions.
By enabling discovery scans, you collect hundreds of metadata points on each piece of the infrastructure, spanning cooling, power distribution, networking, compute, and multiple IoT sensors. The collected data creates a single source of truth for building management systems (BMS), information technology service management (ITSM), and finance systems.
With all teams working from the same current data set, workflows can be coordinated across the organization. This coordination brings efficiency and improves the speed and accuracy of service requests and responses to critical events.
A common, up-to-date data set also reduces the risk of errors and duplicated effort. The automated workflow function validates and provides an audit trail of task completions for billing and security purposes.
Maintenance for Edge Computing
Some vendors have remote capabilities in their products to detect and provide some level of self-healing. Also, hypervisor technology provides the IT team some ability to manage application workloads remotely.
However, historically if any significant repairs, upgrades, or new installations were needed at a site, it meant “rolling a truck,” “flying someone in,” or relying on unskilled local personnel to perform the task. Today a DCIM solution with a focus on Hybrid Cloud Computing can eliminate most of this.
DCIM already monitors power and cooling levels, showing what is happening along with historical trends. It lets you set alarm thresholds to help resolve issues while they’re still minor, rather than requiring a forklift repair later.
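The threshold alarms described here can be sketched as follows (hypothetical metric names and limits; real DCIM tools let operators configure these per device):

```python
# Hypothetical per-site alarm thresholds.
THRESHOLDS = {"inlet_temp_c": 27.0, "power_draw_kw": 4.5}

def check_alarms(readings):
    """Return an alarm message for any reading that crosses its threshold."""
    return [
        f"{metric} at {value} exceeds limit {THRESHOLDS[metric]}"
        for metric, value in readings.items()
        if metric in THRESHOLDS and value > THRESHOLDS[metric]
    ]

# A slightly elevated inlet temperature trips one alarm; power draw is fine.
alarms = check_alarms({"inlet_temp_c": 29.1, "power_draw_kw": 3.2})
print(alarms)
```

Catching the temperature drift here, while it is still a few degrees, is what allows remediation before hardware is damaged.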
Remotely, you can track the status of generators, cooling devices, batteries, and servers, and in turn the application workloads associated with those devices. DCIM lets you run what-if scenarios against power failures and server and network disruptions.
With advance warning, you can initiate and coordinate multiple mitigation actions by automating workflows across various teams. This allows you to correct the issue locally or migrate a workload, or even an entire site.
Because DCIM has mapped the dependencies of an application workload to its server, network connection, and power chain, you know which applications will be affected by a disruption in the infrastructure and can plan accordingly.
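A toy version of this dependency mapping, using a hypothetical workload-to-server-to-PDU chain, shows how a single component failure can be traced back to the workloads it affects:

```python
# Hypothetical dependency map built by discovery: each entry reads
# "X depends on Y" (workload -> server -> rack PDU).
DEPENDENCIES = {
    "billing-app": "server-1",
    "inventory-app": "server-2",
    "server-1": "pdu-A",
    "server-2": "pdu-B",
}
WORKLOADS = ["billing-app", "inventory-app"]

def affected_workloads(failed_component):
    """Return workloads whose dependency chain includes the failed component."""
    def chain(node):
        while node is not None:
            yield node
            node = DEPENDENCIES.get(node)
    return [w for w in WORKLOADS if failed_component in chain(w)]

print(affected_workloads("pdu-A"))  # ['billing-app']
```

A real power chain has more hops (UPS, panel, breaker), but the traversal is the same: walk each workload’s chain and flag any that pass through the failed component.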
DCIM’s remote power management controls enable you to power connected devices on and off remotely, reducing travel time and reliance on local staffing. Advanced DCIM features apply Machine Learning and Artificial Intelligence to predict failures, surface maintenance trends, and perform many other multivariate calculations that improve the resilience and self-reliance of the edge.
DCIM can also leverage Augmented Reality to put trained resources at a site virtually. With AR, you can reliably leverage untrained local resources to assist alongside your virtual presence.
Bell Labs predicts 60% of servers will be located in an edge data center by 2025.
Edge Computing Security
Securing an edge site has many challenges. Sites without other on-site personnel must rely entirely on the building’s physical security to protect the site and alert on abnormal conditions:
- Door and window locks
- Fire suppression systems
- Video and motion sensors
- External shielding (covers and fences) for PDUs, cooling systems, and cables
Staffed sites still face concerns about intrusion and vandalism, as well as haphazard, unintended disruption from local personnel. These locations need to distinguish authorized activity from accidental or malicious access.
Modern DCIM solutions add a security layer to remote edge data center sites by detecting changes in assets’ network status and abnormal power and thermal conditions. DCIM’s discovery process can identify adds, changes, and moves to assets.
From here, you can validate against approved work orders and update the various asset databases and CMDBs. This confirms that approved changes were made correctly, that expected devices are not missing, and that new, unexpected devices are identified. Out-of-compliance changes can alert the ITOps and security teams for further investigation.
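The work-order validation described above can be sketched as a simple set comparison (hypothetical change records and work orders):

```python
def flag_unapproved(detected_changes, approved_orders):
    """Flag asset changes that have no matching approved work order."""
    approved = {(o["serial"], o["action"]) for o in approved_orders}
    return [c for c in detected_changes
            if (c["serial"], c["action"]) not in approved]

# Hypothetical data: discovery detected two changes, but only one
# corresponds to an approved work order.
changes = [
    {"serial": "SW-101", "action": "move"},
    {"serial": "SRV-007", "action": "add"},
]
orders = [{"serial": "SW-101", "action": "move"}]
print(flag_unapproved(changes, orders))
# [{'serial': 'SRV-007', 'action': 'add'}]
```

The unapproved server addition is exactly the sort of finding that would be routed to the ITOps and security teams for investigation.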
DCIM’s automated discovery also addresses several cybersecurity concerns. Discovery captures metadata beyond the physical asset, flagging out-of-date and non-compliant firmware, software, and security patches. Additionally, access logs from physical and virtual servers provide end-to-end audit data for security and compliance reporting.
Furthermore, DCIM, when integrated with third-party access control devices such as security cameras, keypads, and IoT locking systems, can add a deeper layer of security. As part of an integrated management system, DCIM can process the monitored data, respond to threshold alarms, and trigger workflows to lock and unlock doors and cabinets for appropriate personnel.
What Is Nlyte’s Edge Data Center Management?
Nlyte delivers monitoring, insight, and control to your edge data center locations. Nlyte has evolved its Data Center Infrastructure Management software (DCIM) to integrate with Building Management Systems (BMS) to provide a management solution that covers your entire critical computing infrastructure across the Hybrid Cloud.
Nlyte has three primary software applications:
- Nlyte Asset Optimizer (NAO)
- Nlyte Energy Optimizer (NEO)
- Nlyte Asset Explorer (NAE)
All three applications can be, and often are, deployed independently, but they are fully integrated in what we refer to as Nlyte Platinum Plus. Together, Asset Explorer, Asset Optimizer, and Energy Optimizer deliver the core functionality needed to address the items on your edge management to-do list.
These applications, when integrated with our out-of-the-box connectors to technology partners like Automated Logic, ServiceNow, BMC, VMware, and many others, provide the end-to-end management needed to support multiple edge sites as well as the core data center infrastructure.
Nlyte’s Asset Explorer ties into Asset Optimizer to provide automated discovery and inventory of assets, along with hundreds of metadata points, on an ongoing basis. The collected information is then shared simultaneously with connected business intelligence systems to provide a single source of truth across the organization.
Asset Explorer also discovers out-of-policy activity. Through Asset Optimizer, it can trigger workflows for security breaches, missing or unauthorized assets on the network, power or cooling anomalies, and software patch management concerns.
Nlyte’s automated HDIM solution performs regular scans across the organization’s network to discover, inventory, and catalog assets, then validates them against the DCIM asset database.
Nlyte’s DCIM’s discovery process can identify adds, changes, and moves to assets. From here, we can validate against approved work orders and update various asset databases and CMDBs. When tied into the automated workflow functions of DCIM, we can validate and audit task completions.
The Nlyte DCIM solution can prevent a great deal of road-warrior repair trips. By monitoring power and cooling levels, you can see what’s happening and track historical trends. Our DCIM system lets you run what-if scenarios against power failures and server and network disruptions.
Remotely, you can track the health of a server and, in turn, the application workloads associated with it. DCIM software lets you set alarm thresholds to help resolve issues while they’re still minor, rather than waiting for a forklift repair.
Our DCIM controls allow you to remotely power connected devices on and off, reducing your reliance on travel or on-site help.
Advanced features of Nlyte DCIM leverage Machine Learning and Artificial Intelligence to predict failures, surface maintenance trends, and perform many other multivariate calculations that improve the edge’s resilience and self-reliance.
Nlyte DCIM is positioned to leverage Augmented Reality to put trained resources at a site virtually. With AR, you can rely on untrained local resources to assist alongside your virtual presence.
To secure the edge data center, Nlyte DCIM can integrate with and control third-party access control devices such as security cameras, keypads, and IoT locking systems. Nlyte DCIM can process the monitored data and respond to threshold alarms, triggering workflows to lock or unlock doors and cabinets for appropriate personnel.
The discovery process can identify out-of-compliance changes, such as modified device connections, expected devices that are not responding (missing), and unexpected new devices on the network.
Not only does the discovery process identify physical asset status, it also catalogs metadata beyond the physical asset. Nlyte can identify firmware, software, and security patches as compliant or out of date. Additionally, you can see access logs from physical and virtual servers to provide end-to-end audit data for security and compliance reporting.
Schedule a demo today to see how you can use Nlyte’s DCIM software to manage your edge data centers and optimize their efficiency.