What is Edge Computing?

What is driving edge computing? 

Demand comes from multiple directions: enterprise technologies such as manufacturing, oil and gas, and cell towers; IT transactions from the retail and finance sectors; and millions of IoT connection points from cars, facility sensors, and home control devices. Emerging use cases such as fog computing, software-defined networking, predictive maintenance, and blockchain add further demand for edge computing infrastructure.

Gartner recently noted, “By 2022, more than half of enterprise-generated data will be created and processed outside of data centers, and outside of cloud.”

What Does Edge Infrastructure Deliver?

An edge architecture delivers several benefits to the organization:

  • The first and most critical is low latency: bringing the “client” and “server” computers geographically closer reduces latency significantly.
    • The latency and bandwidth costs associated with public cloud platforms can create performance problems for applications using machine learning and artificial intelligence.
    • For a delivery drone or smart car, a few hundred milliseconds of latency can result in a catastrophic outcome.
    • The new 5G technology’s millimeter waves and microwaves don’t travel far, and edge computing centers are needed to provide sufficient coverage.
  • An edge architecture provides increased resilience: with multiple small computing centers, fault tolerance improves by an N+X factor, increasing failover options economically.
  • Organizations demand scalability and faster deployment times: An edge site’s reduced footprint makes it easier to secure adequate space and power. Edge sites are also less complex to build and can even ship preassembled as rolling data centers from vendors such as IBM, Dell, and HPE.  Edge computing lets organizations scale up incrementally based on demand, allowing a more flexible and agile approach to infrastructure during rapid growth.
  • The edge provides unique security benefits: While “security” has many facets in edge computing, the edge can reduce the amount of sensitive data transmitted. Data can be anonymized closer to the source, protecting personally identifiable information and limiting the amount of data stored in any one location (a minimal sketch of this idea follows the list).
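To make that last point concrete, here is a minimal sketch of edge-side anonymization. The field names and the simple salted-hash scheme are assumptions for illustration only, not any particular product’s implementation.

```python
import hashlib

# Fields treated as personally identifiable; everything else passes through.
# These field names are illustrative, not tied to any specific device schema.
PII_FIELDS = {"name", "email", "license_plate"}

def anonymize_at_edge(reading: dict, salt: str = "site-local-secret") -> dict:
    """Hash PII fields locally so only pseudonymous data leaves the site."""
    scrubbed = {}
    for key, value in reading.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            scrubbed[key] = digest[:16]   # pseudonymous token, not the raw value
        else:
            scrubbed[key] = value         # non-sensitive telemetry passes through
    return scrubbed

if __name__ == "__main__":
    raw = {"name": "Jane Doe", "license_plate": "7ABC123", "speed_kph": 62}
    print(anonymize_at_edge(raw))   # PII is replaced before transmission to the cloud
```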

Why is Edge computing infrastructure so important?

Edge computing now processes time-sensitive data critical to logistics and financial transactions, as well as sensitive business and individual (consumer) data.  So you don’t want to run your critical infrastructure from a broom closet, and you don’t want the janitor managing your servers.  You want to mirror the same (common) infrastructure discipline as your core data center.

Gartner made this observation: 45% of all IoT data today is processed at the edge.

Edge computing has the same physical requirements as a centralized data center, just at a smaller scale: networking, power and cooling, security (both physical and network access), the computing devices themselves, and the facility, including capacity and building management.  It all needs to be monitored, managed, maintained, and secured.  The one thing that is different at the edge is the lack of on-site personnel to manage the upkeep.

What Are the Challenges of Edge Computing?

  • Managing edge sites without trained on-site staff
  • Keeping IT systems and facilities in sync on the disposition of assets
  • Avoiding sending someone hundreds or thousands of miles to flip a switch
  • Maintaining the same levels of security you have in the data center
  • Understanding the what-if situations, like “the janitor pulling the plug”

What is Edge Computing Management?

Mapping the Edge

First, take inventory with an automated discovery tool, then use it to check regularly on what is sitting on the network.  Those regular discovery scans need to be validated against your CMDB and DCIM asset databases.  Once you have an accurate account of the physical and virtual systems, each workload needs to be mapped to its network connections and all the way through the power chain.  Shared with other groups, this mapping lets you understand the effects of planned and unplanned disruptions down to the level of an individual workload.  It also improves service management efficiency and alerts security to missing or unauthorized devices and applications on the network.
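To make the reconciliation step concrete, here is a minimal sketch that compares a discovery scan against the asset records to flag missing and unauthorized devices. The data structures are generic; no particular discovery tool or CMDB API is assumed.

```python
def reconcile(discovered: set[str], cmdb_assets: set[str]) -> dict:
    """Compare a discovery scan against CMDB/DCIM records.

    discovered  -- device identifiers (e.g., serial numbers) seen on the network
    cmdb_assets -- device identifiers the CMDB/DCIM database expects at the site
    """
    return {
        "missing":      sorted(cmdb_assets - discovered),  # expected but not responding
        "unauthorized": sorted(discovered - cmdb_assets),  # on the network, not on record
        "confirmed":    sorted(discovered & cmdb_assets),
    }

if __name__ == "__main__":
    scan = {"SN-1001", "SN-1002", "SN-9999"}        # from the latest discovery scan
    records = {"SN-1001", "SN-1002", "SN-1003"}     # from the CMDB / DCIM asset database
    print(reconcile(scan, records))                 # SN-1003 missing, SN-9999 unauthorized
```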

Keeping IT Operations and Facilities Teams in Sync

By extending your DCIM solution out to edge computing sites, you understand what is going on with everything at each site.  You gain visibility into chillers, power supplies, servers, network connections, sensors, and other IoT devices, down to the application workloads themselves.  Visibility into power, cooling, and human access details at the rack and server level lets you fill in the blanks for facilities teams and their building management systems as well as for IT operations, the CMDB, and service desk functions.  With discovery scans enabled, you collect hundreds of metadata points on each piece of infrastructure across cooling, power distribution, networking, compute, and IoT sensors.  The collected data creates a single source of truth for BMS, ITSM, and finance systems.
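The sketch below shows the basic idea of merging facility, IT, and sensor metadata into one asset record that downstream BMS, ITSM, and finance systems can all consume. The field names and source systems are illustrative assumptions, not a real schema.

```python
def build_asset_record(asset_id: str, sources: list[dict]) -> dict:
    """Merge metadata from multiple systems into one record per asset.

    Each entry in `sources` is a partial view of the same asset, e.g. from
    the BMS (power/cooling), the hypervisor (workloads), or a rack sensor.
    Later sources win on conflicting keys; provenance is kept per field.
    """
    record, provenance = {"asset_id": asset_id}, {}
    for source in sources:
        origin = source.get("_source", "unknown")
        for key, value in source.items():
            if key == "_source":
                continue
            record[key] = value
            provenance[key] = origin
    record["_provenance"] = provenance
    return record

if __name__ == "__main__":
    merged = build_asset_record("rack07-server03", [
        {"_source": "bms", "inlet_temp_c": 24.5, "power_w": 310},
        {"_source": "hypervisor", "workloads": ["billing-api", "cache"]},
        {"_source": "dcim", "rack": "R07", "u_position": 12},
    ])
    print(merged)   # one record, usable by BMS, ITSM, and finance integrations
```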

With all groups working from the same, current data set, workflows can be coordinated across teams.  This coordination brings efficiency and improves the speed and accuracy of service requests and responses to critical events.  A common, up-to-date data set reduces the risk of errors and duplicated effort.  The automated workflow function validates task completions and provides an audit trail for billing and security purposes.

Maintenance for Edge Computing

Some vendors build remote capabilities into their products to detect issues and provide some level of self-healing, and hypervisor technology gives the IT team some ability to manage application workloads remotely.  Historically, however, any significant repair, upgrade, or new installation at a site meant “rolling a truck,” “flying someone in,” or relying on unskilled local personnel to perform the task.  Today a DCIM solution with a focus on hybrid cloud computing can eliminate most of these trips.

DCIM already monitors power and cooling levels, showing both what is happening now and the historical trends.  It lets you set alarm thresholds so issues can be resolved while they are still minor, rather than requiring a forklift repair later.  Remotely, you can track the status of generators, cooling devices, batteries, and servers, and in turn the application workloads associated with those devices.  DCIM lets you run what-if scenarios against power failures and server and network disruptions.  With advance warning, you can initiate and coordinate multiple mitigation actions by automating workflows across various teams, allowing you to correct the issue locally or start a migration of a workload or even an entire site.  Because DCIM has mapped each application workload’s dependencies on its server, network connection, and power chain, you know which applications will be affected by a disruption in the infrastructure and can plan accordingly.
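As a simple illustration of threshold-based alarming, the sketch below checks incoming readings and flags conditions before they become severe. The metric names and threshold values are assumptions for the example; real limits come from the site’s power and cooling design.

```python
# Illustrative thresholds; real values depend on the site's design and SLAs.
THRESHOLDS = {
    "inlet_temp_c": 27.0,     # assumed upper bound for this example
    "ups_battery_pct": 40.0,  # assumed minimum acceptable charge
}

def check_reading(metric: str, value: float) -> str | None:
    """Return an alarm message if a reading crosses its threshold, else None."""
    limit = THRESHOLDS.get(metric)
    if limit is None:
        return None
    if metric == "ups_battery_pct" and value < limit:
        return f"{metric}={value} below {limit}: open a maintenance workflow"
    if metric == "inlet_temp_c" and value > limit:
        return f"{metric}={value} above {limit}: open a cooling workflow"
    return None

if __name__ == "__main__":
    for metric, value in [("inlet_temp_c", 28.3), ("ups_battery_pct", 85.0)]:
        alarm = check_reading(metric, value)
        if alarm:
            print("ALERT:", alarm)   # in practice this would trigger a DCIM workflow
```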

DCIM’s remote power management controls let you power connected devices on and off remotely, reducing travel time and reliance on local staffing.  Advanced DCIM features rely on machine learning and artificial intelligence to provide failure prediction, maintenance trends, and many other multivariate calculations that improve the resilience and self-reliance of the edge.  DCIM can also leverage augmented reality to put trained resources at a site virtually.  With AR, you can now reliably use untrained local resources to assist alongside your virtual presence.
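The following sketch illustrates the remote power-cycling idea against a hypothetical smart-PDU REST endpoint. The URL, authentication scheme, and payload are assumptions for illustration only, not any vendor’s actual API.

```python
import time
import requests   # third-party HTTP client, assumed to be installed

# Hypothetical endpoint for a smart PDU at an edge site; not a real vendor API.
PDU_API = "https://pdu.edge-site-12.example.com/api/outlets"

def set_outlet(outlet: int, state: str, token: str) -> bool:
    """Set a single PDU outlet 'on' or 'off' remotely."""
    resp = requests.post(
        f"{PDU_API}/{outlet}",
        json={"state": state},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    return resp.status_code == 200

def power_cycle(outlet: int, token: str, off_seconds: int = 10) -> bool:
    """Power-cycle a hung device instead of sending a technician on site."""
    if not set_outlet(outlet, "off", token):
        return False
    time.sleep(off_seconds)          # let the device fully power down
    return set_outlet(outlet, "on", token)
```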

Bell Labs predicts that 60% of servers will be located in an edge data center by 2025.
https://www.missioncriticalmagazine.com/ext/resources/whitepapers/white-papers-2/TIA-White-Paper-Types-and-Locations-of-Edge-Data-Centers.pdf

Edge Computing Security

Securing an edge site has many challenges.  Sites that don’t reside in a facility with other people must rely entirely on the physical security of the building to protect the site and alert on abnormal conditions:

  • Door and window locks
  • Fire suppression systems
  • Video and motion sensors
  • External shielding (covers and fences) for PDUs, cooling systems, and cables

Staffed sites still have concerns about nefarious intrusion and vandalism as well as haphazard, unintended disruption by local personnel.  These locations need to be able to distinguish authorized activity from both malicious and merely hapless access.

Modern DCIM solutions add a security layer to remote edge sites by detecting changes in assets’ network status and abnormal power and thermal conditions.  DCIM’s discovery process can identify adds, changes, and moves to assets.  From there, you can validate against approved work orders and update the various asset databases and CMDBs, confirming that approved changes were made correctly, that expected devices are not missing, and that new, unexpected devices are identified.  Out-of-compliance changes can alert ITOps and security teams for further investigation.
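Here is a minimal sketch of what that validation could look like: detected adds, moves, and removals are matched against approved work orders, and everything unmatched is flagged for ITOps and security review. The change and work-order structures are invented for illustration.

```python
def validate_changes(detected: list[dict], approved_orders: list[dict]) -> list[dict]:
    """Flag asset changes that have no matching approved work order.

    A change matches an order when the asset and action agree; anything
    unmatched is escalated for investigation.
    """
    approved = {(o["asset_id"], o["action"]) for o in approved_orders}
    flagged = []
    for change in detected:
        if (change["asset_id"], change["action"]) not in approved:
            flagged.append({**change, "status": "out_of_compliance"})
    return flagged

if __name__ == "__main__":
    changes = [
        {"asset_id": "SN-2001", "action": "moved", "site": "edge-07"},
        {"asset_id": "SN-3050", "action": "added", "site": "edge-07"},
    ]
    orders = [{"asset_id": "SN-2001", "action": "moved"}]
    for item in validate_changes(changes, orders):
        print("Escalate to ITOps/Security:", item)   # SN-3050 was never approved
```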

DCIM’s automated discovery also addresses several cybersecurity concerns.  Because discovery captures metadata beyond the physical asset, it can identify out-of-date or non-compliant firmware, software, and security patches.  Additionally, access logs from physical and virtual servers provide end-to-end audit data for security and compliance reporting.

Furthermore, DCIM integrated with third-party access control devices, from security cameras and keypads to IoT locking systems, adds a deeper layer of security.  As part of an integrated management system, DCIM can process the monitored data, respond to threshold alarm instructions, and trigger workflows to lock and unlock doors and cabinets for the appropriate personnel.
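The sketch below illustrates the lock/unlock workflow idea. The badge roster, cabinet names, and the unlock decision are hypothetical stand-ins for whatever access-control integration is actually in place.

```python
from datetime import datetime

# Hypothetical roster of badges allowed into specific cabinets at this edge site.
AUTHORIZED = {
    "cab-12": {"badge-0042", "badge-0107"},   # network cabinet
    "cab-15": {"badge-0042"},                 # power distribution cabinet
}

def handle_access_request(cabinet: str, badge: str) -> str:
    """Decide whether to unlock a cabinet and log the decision."""
    allowed = badge in AUTHORIZED.get(cabinet, set())
    decision = "unlock" if allowed else "deny"
    # In a real deployment this would call the locking system's API and,
    # on a denial, trigger a security workflow (e.g., a camera snapshot).
    print(f"{datetime.utcnow().isoformat()} {cabinet} {badge} -> {decision}")
    return decision

if __name__ == "__main__":
    handle_access_request("cab-12", "badge-0042")   # authorized: unlock
    handle_access_request("cab-15", "badge-9999")   # unknown badge: deny and escalate
```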

What is Nlyte’s Edge Computing Management?

Nlyte delivers monitoring, insight, and control to your edge computing locations.  Nlyte has evolved its Data Center Infrastructure Management (DCIM) software to integrate with Building Management Systems (BMS, or BAS if you prefer that term) to provide a management solution that covers your entire critical computing infrastructure across the hybrid cloud.

Nlyte has three primary software applications: Nlyte Asset Optimizer (NAO), Nlyte Energy Optimizer (NEO), and Nlyte Asset Explorer (NAE).  All three can be deployed independently, but they are fully integrated in what we refer to as Nlyte Platinum Plus.  Together, Asset Explorer, Asset Optimizer, and Energy Optimizer deliver the core functionality needed to address the items on your edge management to-do list.  When integrated with our out-of-the-box connectors to technology partners such as Automated Logic, ServiceNow, BMC, VMware, and many others, these applications provide the end-to-end management needed to support multiple edge sites as well as the core data center infrastructure.

Nlyte’s Asset Explorer ties into Asset Optimizer to provide automated discovery and inventory of assets, along with hundreds of metadata points, on an ongoing basis.  The collected information is then shared with the connected business intelligence systems to provide a single source of truth across the organization.  Asset Explorer also discovers out-of-policy activities; through Asset Optimizer, it can trigger workflows related to security breaches, missing or unauthorized assets on the network, power or cooling anomalies, and software patch management concerns.

Nlyte’s automated HDIM solution performs regular scans across the organization’s network to discover, inventory, and catalog assets, then validates them against the DCIM asset database.

Nlyte’s discovery process can identify adds, changes, and moves to assets.  From there, these can be validated against approved work orders, and the various asset databases and CMDBs updated.  Tied into DCIM’s automated workflow functions, task completions can be validated and audited.

As mentioned above, the Nlyte DCIM solution can prevent a great many road-warrior repair trips.  By monitoring power and cooling levels, we can see what is happening now and the historical trends.  Our DCIM system lets us run what-if scenarios against power failures and server and network disruptions.  Remotely, we can track the health of a server and, in turn, the application workloads associated with it.  DCIM software lets you set alarm thresholds so issues are resolved while still minor, rather than waiting for a forklift repair.

Our DCIM controls allow us to remotely power connected devices on and off, reducing reliance on local help that may not be available.

Advanced features of Nlyte DCIM leverage machine learning and artificial intelligence to provide failure prediction, maintenance trends, and many other multivariate calculations that improve the edge’s resilience and self-reliance.

Nlyte DCIM is positioned to leverage augmented reality to put trained resources at a site virtually.  With AR, you can rely on untrained local resources to assist alongside your virtual presence.

To secure the edge, Nlyte DCIM can integrate with and control third-party access control devices, from security cameras and keypads to IoT locking systems.  Nlyte DCIM can take the monitored data, respond to threshold alarm instructions, and trigger workflows to lock or unlock doors and cabinets for the appropriate on-site personnel.

The discovery process can identify out-of-compliance changes, such as changes to connected devices, expected devices that are not responding (missing), and unexpected devices that are now connected.

Not only does the discovery process identify physical asset status, it also catalogs metadata beyond the physical asset.  Nlyte can identify firmware, software, and security patches as compliant or out of date.  Additionally, we can see access logs from physical and virtual servers to provide end-to-end audit data for security and compliance reporting.