A data center is a facility of one or more buildings that house a centralized compute infrastructure, typically servers, storage, and networking equipment. The infrastructure includes external and backup power systems, external networking and communication systems, cabling systems, environmental controls, and security systems.
Its primary role is to support the many applications and workloads organizations use to run their business. Data centers remain viable assets, but as computing demands evolve, the data center is morphing into a hybrid compute infrastructure. This modern approach encompasses the traditional data center, which typically houses the mission-critical applications sometimes referred to as “The Crown Jewels”.
To meet the demands of tier 2 applications, organizations are leveraging public cloud data centers for their less critical applications and DevOps activities. Where IoT devices and low-latency data demands are high, organizations are deploying edge compute facilities.
Choosing Your Data Center Location
- Labor costs and availability – while labor costs may be favorable in a particular region, is there sufficient talent to cover all of the various skills needed to run and maintain your data center?
- Environmental conditions – Temperature and humidity variances wreak havoc on environmental systems and forecasting. Earthquakes, hurricanes, blizzards, and tornadoes are unpredictable and can shut down a facility indefinitely.
- Airport and highway accessibility and quality – Large equipment and service vehicles are needed to build and maintain the data center; the site must be readily accessible for deliveries, service calls, and employees.
- Proximity to major markets and customers – Networking costs and latency play a factor in running an efficient facility that meets customer demand.
- Availability and cost of real estate options – Build versus buy requires weighing building costs and quality of construction against incentives from landlords and local governments.
- Amount of local and state economic development incentives – Beyond construction considerations, local jurisdictions may provide development incentives in rural or redevelopment areas, and fewer in densely populated or over-resourced areas. Weigh against these the taxes and regulatory requirements, which can be costly and restrictive.
- Availability of telecommunications infrastructure – Ensure your future bandwidth demands can be met, that your provider offers redundant systems, and that multiple providers are available.
- Cost of utilities – Costs vary globally, and in some geographies you may not have a choice of where to place your data center; considering alternative power sources is prudent, and in some countries required.
Data Center Tier Rating Breakdown: Tier 1, 2, 3, 4
- Tier 1: A Tier 1 data center has a single path for power and cooling and few, if any, redundant and backup components. It has an expected uptime of 99.671% (28.8 hours of downtime annually).
- Tier 2: A Tier 2 data center has a single path for power and cooling and some redundant and backup components. It has an expected uptime of 99.741% (22 hours of downtime annually).
- Tier 3: A Tier 3 data center has multiple paths for power and cooling and systems in place to update and maintain it without taking it offline. It has an expected uptime of 99.982% (1.6 hours of downtime annually).
- Tier 4: A Tier 4 data center is built to be completely fault tolerant and has redundancy for every component. It has an expected uptime of 99.995% (26.3 minutes of downtime annually).
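The downtime figures above follow directly from each tier's uptime percentage. A minimal sketch of the arithmetic (the `annual_downtime_hours` helper is illustrative, not a standard API; the computed values land close to the figures quoted above):

```python
# Annual downtime implied by a tier's uptime guarantee.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year

def annual_downtime_hours(uptime_percent: float) -> float:
    """Hours of downtime per year implied by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for tier, uptime in [("Tier 1", 99.671), ("Tier 2", 99.741),
                     ("Tier 3", 99.982), ("Tier 4", 99.995)]:
    print(f"{tier}: {annual_downtime_hours(uptime):.1f} hours/year")
```

Note the steep jump in availability between Tier 2 and Tier 3: moving from "some redundancy" to concurrently maintainable paths cuts expected downtime from roughly a day per year to under two hours.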
Data Center Physical Security
Data Security – physical and telemetric systems, rigid adherence to security policy, and highly available redundancy constitute the data protection foundation. These protect against physical intrusion, cyber breaches, human error, and environmental events.
Service Continuation – proper architecture of power and networking systems, including redundancy, disruption simulations, and automated workflows, delivers on SLAs and guards against unforeseen incidents.
Personnel and asset safety and preservation – appropriate data center design practices are used to monitor weight and power distribution and cable management, with alarm systems that alert before safety thresholds are breached.
Asset Integrity Monitoring – Improve Your Data Center Security
Asset Integrity Monitoring is a cornerstone practice for any compute infrastructure, providing continuous monitoring, maintaining an accurate data source, and alerting on power and environmental anomalies. Data center teams are able to:
- Reduce, predict, and plan for power and thermal anomalies
- Identify at risk firmware and software
- Identify human errors that fall outside security and CMP policies
- Detect unauthorized HW/SW on the network
Operations and security teams benefit from increased visibility, a simplified audit process, and current, accurate asset data through:
- Automated discovery of assets and attributes
- Traceable lifecycle management and workflows
- Logged user access, date, and time
- Identification of unknown and non-compliant HW/SW
- Critical incident and custom report queries
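The alerting side of asset integrity monitoring can be sketched as a simple threshold check over power and environmental readings. This is a minimal illustration, assuming hypothetical `Reading`, `WARN_THRESHOLDS`, and `check_reading` names; it is not any specific product's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    asset_id: str
    metric: str      # e.g. "inlet_temp_c" or "power_kw"
    value: float

# Hypothetical warning thresholds, set below hard safety limits so
# teams are alerted *before* a limit is actually breached.
WARN_THRESHOLDS = {"inlet_temp_c": 27.0, "power_kw": 5.0}

def check_reading(reading: Reading) -> Optional[str]:
    """Return an alert message if the reading exceeds its warning threshold."""
    limit = WARN_THRESHOLDS.get(reading.metric)
    if limit is not None and reading.value > limit:
        return (f"ALERT {reading.asset_id}: {reading.metric}="
                f"{reading.value} exceeds {limit}")
    return None

readings = [Reading("rack-07", "inlet_temp_c", 29.5),
            Reading("rack-07", "power_kw", 4.2)]
alerts = [a for r in readings if (a := check_reading(r))]
```

Real monitoring adds trend analysis and anomaly detection on top of static thresholds, but the principle is the same: compare live telemetry against a known-good baseline and surface deviations.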
What is a Hybrid Compute Infrastructure?
A hybrid compute infrastructure augments the traditional data center. It allows organizations to optimize application workloads, balancing cost and user experience. It also enables the adoption of new technologies such as virtualization, high-density racks, and hyperconverged infrastructure.
A hybrid approach allows any organization and management style to tailor the infrastructure that is right for its business. Conservative, security-focused organizations will keep critical applications under their own watch in a physical data center owned and managed by their personnel.
For organizations that aren’t ready to invest tens of millions to build or expand data centers, a colocation provider is a great option balancing risk and cost. Where speed of deployment and short-term compute power are needed, public cloud and SaaS deployments are ideal. When sheer speed of data is needed for IoT or high-speed transactions, edge computing is now essential.
How does Data Center Infrastructure Management Software Improve the Data Center?
DCIM implementation bridges the gap between Facilities and IT, coordinating planning and management through automation and transparent communication, leveraging a Single Source of Truth.
From the receiving dock to decommissioning, Nlyte DCIM maximizes the production value of your assets over time. By capturing change at its source, Nlyte DCIM facilitates timely onboarding of equipment at receiving through to the decommissioning of older equipment.
Optimize your resources and personnel with measurable, repeatable, intelligent processes that make individuals more efficient. Support cross-team assignment for multi-team tasks. Extend the adoption of ITIL and COBIT into the data center without any additional development or services.
Bi-lateral systems communication
Nlyte becomes your single source of truth for all assets sharing information between Facilities, IT, and business systems.
Infrastructure and workload optimization
Designed to support your operational efficiency goals and reduce the number of ad-hoc processes at play in your data center. Unlock unused and under-utilized workload, space, and energy capacity to maximize your ROI.
Space and efficiency planning
Forecast the future state of your data center’s physical capacity based on consumption management. “What if” models forecast the exact capacity impact of data center projects on space, power, cooling, and networks.
Risk, audit, compliance, and reporting
Power failure simulations and automated workflows reduce the risk of the unknown and of human error. Audit and reporting tools improve visibility and help achieve compliance requirements.
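A power failure simulation can be as simple as asking, for each feed, which assets would lose all power if that feed went down. A minimal sketch, with hypothetical feed and asset data (not any product's simulation engine):

```python
# Which power feeds supply each asset (illustrative data).
# Dual-corded racks survive a single feed failure; single-corded do not.
ASSET_FEEDS = {
    "rack-01": {"feed-A", "feed-B"},
    "rack-02": {"feed-A", "feed-B"},
    "rack-03": {"feed-A"},          # single-corded: at risk
}

def simulate_feed_failure(failed_feed: str) -> list:
    """Return assets that lose all power when failed_feed goes down."""
    return [asset for asset, feeds in ASSET_FEEDS.items()
            if feeds <= {failed_feed}]  # all of the asset's feeds failed

at_risk = simulate_feed_failure("feed-A")  # only rack-03 goes dark
```

Running the check across every feed before maintenance turns an unknown risk into a concrete remediation list, which is exactly the kind of human-error reduction the simulation workflow is meant to deliver.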