What Is a Data Center?
Tiers, Types, and More

A data center is a facility of one or more buildings that houses centralized computing infrastructure, typically servers, storage, and networking equipment.

In this world of apps, big data, and digital everything, you can’t stay on top of your industry without cutting-edge computing infrastructure.

If you want to keep things in-house, the answer is the data center.

Its primary role is to support the crucial business applications and workloads an organization uses to run its business.

In this article, we’ll break down exactly what’s in a data center, different types and tier ratings, crucial systems to maximize uptime, and how to find the right location if you’re planning to build one of your own.

The Role of a Data Center:
What Does a Data Center Do?

A data center is designed to handle high volumes of data and traffic with minimum latency, which makes it particularly useful for the following use cases:

  • Private cloud: hosting in-house business productivity applications such as CRM, ERP, etc.
  • Processing big data, powering machine learning and artificial intelligence.
  • High-volume eCommerce transactions.
  • Powering online gaming platforms and communities.
  • Data storage, backup, recovery, and management.

There are other examples as well, but the above are some of the most common use cases for businesses.

Of course, in 2021, you could just outsource all of the data processing to a third party, like AWS or Google Cloud.

But it’s not always easy for an enterprise to give another party access to the data, not to mention it’s often more expensive at scale.

According to a 2020 study, companies choose to use a data center over public cloud environments to reduce costs, solve performance issues, or meet regulatory requirements.

[Chart: reasons for moving to private cloud]

What Is In a Data Center?

A data center houses everything required to safely store and process data for your organization (or your clients), including physical servers, hard drives, and cutting-edge networking equipment.

The infrastructure also includes external and backup power systems, external networking and communication systems, cabling systems, environmental controls, and security systems.

If you’ve ever visited a data center, it can often look and feel like you’re in a sci-fi movie. With the rows of servers, cooling towers, and the absurd number of network cables, you could swear you were looking at The Matrix mainframe.

Today, when uptime as close to 100% as possible is expected, a data center often includes a smart control system that automatically adjusts cooling, climate control, and more to optimize performance.

This is a Data Center Infrastructure Management (DCIM) system. It basically takes the same concepts as a smart house (automatic temperature control, etc.) to the next level.

If you never want your private cloud of applications and big data to be unavailable, it’s a necessity.

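To make that concrete, here is a minimal Python sketch of the kind of closed-loop adjustment such a system automates. The read_inlet_temps and set_cooling_output functions are hypothetical stand-ins for real sensor and building-management interfaces, not any vendor's API.

```python
# Minimal sketch of the closed-loop climate control a DCIM system automates.
# read_inlet_temps() and set_cooling_output() are hypothetical stand-ins for
# the sensor feeds and building-management controls a real deployment exposes.

TARGET_C = 24.0     # desired server inlet temperature (illustrative)
TOLERANCE_C = 1.5   # acceptable drift before adjusting

def adjust_cooling(read_inlet_temps, set_cooling_output, current_output: float) -> float:
    """Nudge cooling output (0.0-1.0) up or down based on the hottest rack inlet."""
    hottest = max(read_inlet_temps())  # worst-case inlet temperature in degrees C
    if hottest > TARGET_C + TOLERANCE_C:
        current_output = min(1.0, current_output + 0.05)  # too warm: add cooling
    elif hottest < TARGET_C - TOLERANCE_C:
        current_output = max(0.0, current_output - 0.05)  # too cool: save energy
    set_cooling_output(current_output)
    return current_output
```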

Types of Data Centers

There are many types of data centers that may or may not be suitable for your company’s needs. Let’s take a closer look:

Colocation
A colocation center — also known as a “carrier hotel” — is a type of data center where you can rent equipment, space, and bandwidth from the data center’s owner.

For example, instead of renting a virtual machine from a public cloud provider, you can just straight-up rent a certain amount of their hardware from specified data centers.

Enterprise
An enterprise data center is a fully company-owned data center used to process internal data and host mission-critical applications.

Cloud
By using third-party cloud services, you can set up a virtual data center in the cloud. This is a similar concept to colocation, but you may take advantage of specific services rather than just renting the hardware and configuring it yourself.

Edge Data Center
An edge data center is a smaller data center that is as close to the end user as possible. Instead of having one massive data center, you instead have multiple smaller ones to minimize latency and lag.

As IoT deployments and demand for low-latency data grow, organizations are increasingly deploying Edge computing facilities.

Micro Data Center
A micro data center is essentially an edge data center pushed to the extreme. It can be as small as an office room, handling only the data processed in a specific region.

Large enterprise data centers are still the most popular, but experts foresee continued growth in colocation and micro data centers.

[Chart: most popular data center types]

Data centers are still viable assets for organizations, but as computing demands and the industry evolve, the enterprise data center is morphing into a hybrid computing infrastructure.

This modern approach still encompasses the traditional data center, which typically houses the mission-critical applications (sometimes called “the crown jewels”) where maximum uptime and privacy are a must.

To meet the demands of tier 2 applications (non-mission-critical apps), organizations often leverage public cloud data centers. For example, many companies rely on third-party cloud services for their DevOps activities.

We also categorize data centers by tiers, based on their expected uptime and the robustness of their infrastructure.

Data Center Tier Rating Breakdown: Tier 1, 2, 3, 4

Companies also rate data centers by tier to highlight their expected uptime and reliability.

Let’s break it down:

  • Tier 1: A Tier 1 data center has a single path for power and cooling and few, if any, redundant and backup components. It has an expected uptime of 99.671% (28.8 hours of downtime annually).
  • Tier 2: A Tier 2 data center has a single path for power and cooling and some redundant and backup components. It has an expected uptime of 99.741% (22 hours of downtime annually).
  • Tier 3: A Tier 3 data center has multiple paths for power and cooling and systems in place to update and maintain it without taking it offline. It has an expected uptime of 99.982% (1.6 hours of downtime annually).
  • Tier 4: A Tier 4 data center is built to be completely fault-tolerant and has redundancy for every component. It has an expected uptime of 99.995% (26.3 minutes of downtime annually).

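Those downtime figures follow directly from the uptime percentages: the allowed downtime is simply one minus the uptime fraction, multiplied by the 8,760 hours in a year (the commonly quoted hour figures are rounded). A quick Python sketch shows the arithmetic:

```python
# Annual downtime implied by an uptime percentage (non-leap year: 8,760 hours).
HOURS_PER_YEAR = 365 * 24

def annual_downtime_hours(uptime_percent: float) -> float:
    """Hours per year that a given uptime percentage still allows."""
    return (1 - uptime_percent / 100) * HOURS_PER_YEAR

for tier, uptime in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    hours = annual_downtime_hours(uptime)
    print(f"Tier {tier}: {uptime}% uptime -> ~{hours:.1f} h (~{hours * 60:.0f} min) of downtime per year")
```
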
Which tier of data center you need depends on your service SLAs and other factors.

In addition to hardware, where you decide to build your data center can have a big impact on your results.

Choosing Your Data Center Location Is Crucial

Choosing the location of your data center is one of the most important decisions you’ll make.

Here are just some of the things you must consider:

  • Proximity to major markets and customers — Latency and connection reliability play a major role in running an efficient facility that meets customer demand.
  • Labor costs and availability — While labor costs may be favorable in a particular region, is there enough talent (across disciplines) to run and maintain your data center?
  • Environmental conditions — Temperature and humidity variances wreak havoc on environmental systems and forecasting, while earthquakes, hurricanes, blizzards, and tornadoes are unpredictable and can shut down a facility indefinitely.
  • Airport and highway accessibility and quality — Building and maintaining a data center requires moving large equipment, and the site must be readily accessible for deliveries, service visits, and employees.
  • Availability and cost of real estate options — The build-versus-buy decision requires weighing construction costs and quality against incentives from landlords and local governments.
  • Amount of local and state economic development incentives — Beyond construction considerations, local jurisdictions may offer development incentives in rural or redevelopment areas, while densely populated or over-resourced areas are often less inviting. On the other side of the ledger are taxes and regulatory requirements that can be costly and restrictive.
  • Availability of telecommunications infrastructure — Make sure local providers can meet your future bandwidth demands, that your provider offers redundant systems, and that multiple providers are available.
  • Cost of utilities — Utility costs vary globally, and in some geographies you may have little choice about where to place your data center. Considering alternative power sources is prudent and, in some countries, required.

Data Center Physical Security:
How to Keep Your Data Safe

There are three important concepts to keep in mind when designing a policy to keep your data safe and available at all times — data security, service continuation, and personnel and asset safety.

Data Security
Physical and telemetric systems, rigid security policy adherence, and highly available redundancy make up the data protection foundation. These protect against physical intrusion, cyber breaches, and human and environmental events.

Service Continuation
Set up the proper architecture of power and networking systems, including redundancy, disruption simulations, and automated workflows. That way, you can deliver on SLAs and protect yourself against unforeseen incidents.

Personnel and Asset Safety and Preservation
Use proven data center design practices to monitor weight and power distribution and cable management, with alarm systems that alert you before safety thresholds are reached.

Asset Integrity Monitoring:
Improve Your Data Center Security

Asset integrity monitoring is a cornerstone practice for any major computing infrastructure. It continuously monitors your systems for anomalies and alerts you immediately to power and environmental incidents.

Data center teams can use these tools to:

  • Reduce, predict, and plan for power and thermal anomalies.
  • Identify at-risk firmware and software.
  • Identify human errors outside security and CMP policies.
  • Detect unauthorized hardware or software on the network.

Operations and security teams benefit from increased visibility and a simplified audit process built on an accurate asset data set, which provides:

  • Automated discovery of assets and attributes.
  • Traceable lifecycle management and workflows.
  • Logged user access, date, and time.
  • Identification of unknown and non-compliant hardware and software.
  • Critical incident and custom report queries.

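As an illustration only (not any particular product's implementation), here is a minimal Python sketch of the threshold-alert idea at the heart of asset integrity monitoring. The metrics, limits, and Reading structure are hypothetical placeholders for real sensor and inventory feeds.

```python
# Minimal sketch of threshold-based alerting for asset integrity monitoring.
# Metrics, limits, and the Reading structure are illustrative placeholders;
# a real system would pull these from sensors, BMS feeds, and the asset inventory.

from dataclasses import dataclass

@dataclass
class Reading:
    asset_id: str
    metric: str      # e.g. "inlet_temp_c" or "power_kw"
    value: float

THRESHOLDS = {
    "inlet_temp_c": 27.0,   # illustrative upper bound for inlet temperature
    "power_kw": 8.0,        # illustrative per-rack power budget
}

def check_readings(readings: list[Reading]) -> list[str]:
    """Return alert messages for any reading that crosses its threshold."""
    alerts = []
    for r in readings:
        limit = THRESHOLDS.get(r.metric)
        if limit is not None and r.value > limit:
            alerts.append(f"{r.asset_id}: {r.metric}={r.value} exceeds {limit}")
    return alerts

# Example: a rack drawing more power than its budget triggers one alert.
print(check_readings([Reading("rack-42", "power_kw", 9.3)]))
```
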
What is a Hybrid Computing Infrastructure?

A hybrid computing infrastructure means using a mix of traditional enterprise data centers and public cloud infrastructure.

A hybrid computing infrastructure augments the traditional data center, allowing you to balance application workloads and optimize user experience and costs.

It also enables the adoption of new technologies such as virtualization, high-density racks, and hyper-converged infrastructure. (If you have no idea what any of that means, all the more reason to outsource some of your computing.)

A hybrid approach allows any organization, whatever its management style, to tailor an infrastructure that is right for its business. Conservative and security-focused organizations will keep critical applications under their watch in a physical data center owned and managed by their personnel.

For organizations that aren’t ready to invest tens of millions to build or expand data centers, using a colocation provider is a great option for balancing risk and cost.

Where speed of deployment and short-term computing power is needed, the public cloud and SaaS deployments are ideal.

In use cases where latency must be as low as possible — for example, IoT or high-speed transactions — Edge computing is crucial.

How Does Data Center Infrastructure Management (DCIM) Software Improve the Data Center?

DCIM bridges the gap between facilities and IT, coordinating planning and management through automation and transparent communication, leveraging a “single source of truth”.

What does that actually mean? All the data and controls you need to manage your data center are available in one place. (And much of the time, it manages itself without any of your input.)

Asset Management
From the receiving dock to decommissioning, Nlyte DCIM maximizes the production value of your assets over time. Capturing change at its source, Nlyte DCIM facilitates timely onboarding of equipment at the time of receiving, and streamlines the decommissioning of older equipment.

Workflow Automation
Optimize your resources and personnel with measurable, repeatable, intelligent processes that make individuals more efficient. Support cross-team assignment for multi-team tasks. Extend the adoption of ITIL and COBIT into the data center without any additional development or services.

Bi-lateral Systems Communication
Nlyte becomes your single source of truth for all assets, sharing information between Facilities, IT, and business systems.

Infrastructure and Workload Optimization
Nlyte is designed to support your operational efficiency goals and reduce the number of ad-hoc processes at play in your data center. Unlock unused and under-utilized workload, space, and energy capacity to maximize your ROI.

Space and Efficiency Planning
Forecast the future state of your data center’s physical capacity based on how it is actually being consumed. “What if” models predict the capacity impact of data center projects on space, power, cooling, and networks.

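As a simplified illustration of the arithmetic behind a “what if” model (not Nlyte's actual implementation), the sketch below checks whether a planned deployment fits within a rack's remaining space and power. The what_if helper and all figures are hypothetical.

```python
# Simple "what if" capacity check: does a planned deployment fit within a
# rack's remaining space and power budget? All figures are illustrative only.

def what_if(rack_units_free: int, power_headroom_kw: float,
            servers: int, units_per_server: int = 2, kw_per_server: float = 0.5):
    """Estimate the space and power a deployment needs and whether it fits."""
    space_needed = servers * units_per_server
    power_needed = servers * kw_per_server
    fits = space_needed <= rack_units_free and power_needed <= power_headroom_kw
    return {
        "space_needed_u": space_needed,
        "power_needed_kw": power_needed,
        "fits": fits,
    }

# Example: 10 new 2U servers at ~0.5 kW each against 16U and 6 kW of headroom.
print(what_if(rack_units_free=16, power_headroom_kw=6.0, servers=10))
```
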
Risk, Audit, Compliance, and Reporting
Power failure simulations and automated workflow reduce the risk of the unknown and human error. Audit and reporting tools improve visibility and help achieve compliance requirements.

Conclusion

Even as the world of cloud computing continues to grow with stricter regulations and higher customer expectations, we’re seeing a return to the data center, often in a network of smaller “Edge” or “micro” data centers.

If you’re looking to start your own data center, and you want to maximize uptime and efficiency, Nlyte can act as the brain of your data center, managing your cooling towers, climate systems, and more to optimize performance and equipment longevity.

Book a demo today to see what the brain of the data center of the future looks like.