Bitmain Bitcoin Mining Data Center in Texas

All About the Bitmain Data Center in Texas


Back in July 2018, Bitmain, then the largest Bitcoin miner in the world, announced that it would construct an all-new data center and mining facility near Austin, in Rockdale, Texas. The company planned to invest more than $500 million over seven years into the state, county, and local economies. This blockchain data center in Milam County was intended to be a prime part of the company’s strategy to expand and invest in North America.


All About Bitmain


Bitmain was founded in 2013 as a semiconductor and blockchain company dedicated to designing and creating high-performance software, hardware, and services for communities and customers worldwide. Bitmain offers some of the most industry-defining technologies supporting financial tech, helping a global, decentralized network grow, connect, and benefit everyone involved. While the company is headquartered in Beijing, it also boasts research centers, product centers, and offices globally in spots like Singapore, Zug, Silicon Valley, Hong Kong, Tel Aviv, and Amsterdam.


The Plans for a Texas Data Center


Based on statements released by Bitmain, the facility was expected to begin operating in 2019 and to reach full scale within two years. The data center was slated to occupy an abandoned aluminum smelter plant and to bring hundreds of new jobs to the struggling area. This was intended to create more disposable income in the area and to boost property and business values throughout the county.


Bitmain also noted that it would be partnering with various educational institutions and schools in the county to offer residents and students the skills needed to choose a career at the facility. The company nearly immediately announced recruitment needs for positions like sales associates, engineers, senior management, research associates, finance associates, and technicians.


New Updates on the Blockchain Data Center


Unfortunately, less than a year after Bitmain announced this facility, development of what was billed as “the largest data center in the world” was halted in Rockdale, Texas. This is a serious blow to the city, which has been hit hard since the closure of its coal mine more than a decade ago. The unemployment rate in the city at one point hit 12.5% during the recession.


The last year was not great for cryptocurrency mining hosts, and many Bitcoin miners who got involved in 2017 stepped away in 2018 as mining Bitcoin became prohibitively expensive. Bitmain started a hiring freeze in November 2018, and early in 2019 it explained that the facility would no longer be opened. This happened after the job fair planned for the facility was delayed time and time again.


What Might Be Coming Next


While Bitmain will not be coming to this Texas community, it remains to be seen whether other facilities will be closed or employees will lose their jobs. For now, the company seems to be doing well despite its struggles, and it will be interesting to see what comes next.



Underground Data Center Trend Is Starting to Gain Steam


The need for data centers has been growing for a number of years, and it continues to grow today. However, many data center operators are looking for a new way to create facilities. Rather than buying and retrofitting a building that’s above ground, many are starting to look at the benefits that could come from heading underground. Even with an underground building, it is still possible to create quality multitenant data centers.

Below the Earth

Going underground is, of course, still a relatively new data center trend. Many operators are content to take structures that already exist or to build new structures aboveground. Heading below the earth does have a lot of benefits, though.

It was only a few years ago that Iron Mountain started to build their underground facility in Pennsylvania. The data center is located in what used to be a limestone mine. The data center itself is 220 feet underground, and it features 1.7 million square feet of space. This is a large and secure structure that can support up to 10 megawatts of critical power. It provides carrier connections and acts as a powerful, technologically advanced data center.

Despite being underground, the facility can be exactly what the company wants it to be. Iron Mountain can provide prebuilt client space, create custom data center options for its clients, and offer a range of services. Offering both space and services, while operating underground, has helped the company stand out.

What Makes Underground Facilities a Great Idea?

There are actually a few reasons that underground data centers are gaining in popularity. One of the biggest of those reasons is security, naturally. When there are fewer entrances and exits, and when the building is surrounded by the earth, there are fewer places that are at risk of a physical intrusion. Combine the fact that it is underground with security guards, cameras, and biometric scanners, and it helps to make it a very secure location.

Another reason it might be a good idea to consider building underground data centers is the cost. Even though a company will have to find an old mine, as in the case of Iron Mountain, or pay for construction, that is still often easier than finding a suitably located space of similar size above ground. It will physically look more secure, as well. This can act as a psychological advantage for companies with clients that need the best security possible.

In addition to safety from thieves, there is also the fact that an underground facility tends to be less at risk for many types of natural disasters. There are other advantages, as well. For example, because they are underground, they will have a lower ambient temperature and the advantage of geothermic cooling. Essentially, this means the facility will stay cooler naturally, which can cut down on the amount of energy used to keep the servers at an optimal temperature. This provides more efficient use of power.

Those who are looking for a data center that is truly as safe as possible will want to consider the benefits of going underground. Of course, this doesn’t mean that all of the data centers of the future are going to be heading underground. It’s just an option that can be considered.


Data Center Trends Facts and Statistics

2019-2020 Data Center Trends

  • Between 2020 and 2022, 10% of IT organizations will be using serverless computing
  • AI adoption will continue to grow 34% year over year into 2020
  • Disaster recovery service demand grew 23% in 2019
  • The average Tier 1-2 data center uses 24,000 miles of network cable
  • There were 562 hyperscale data centers in 2019
  • AI is reducing cooling costs, which make up 40% of data center costs
  • 28% of cloud spending is focused on private cloud hosting

Technology advances at rapid rates, and this means that data servers are growing in ways that might have been unexpected even just a few years ago. When you start to look at some of the various trends that are making their way to data centers, you can see that there are some fundamental changes coming down the line. It’s about more than just the cloud and providing solid infrastructure. It is also about the future of that infrastructure, along with delivering new services that will make their customers happier and their data safer. Let’s look at some of the biggest trends that are coming in 2019 and beyond.

Serverless Computing

This is an interesting piece of technology that could become very important in the field of data servers and colocation. Rather than dedicating hardware, this system makes use of the function-platform-as-a-service model, a software architecture pattern. It allows for rapid scaling, as well as more accurate billing that reflects actual usage. This type of technology is just now coming to the forefront, and it is likely that between 2020 and 2022, around 10% of IT organizations will be using serverless computing. This could bring big changes to the way data colocation centers operate.
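As a rough illustration (the handler name and event shape here are hypothetical, not any specific platform's API), a serverless function is just a small handler the platform invokes once per event, scaling instances and billing by actual execution time:

```python
import json

# Hypothetical FaaS-style handler: the platform calls handle() once per
# incoming event, spins instances up and down automatically, and bills
# only for the time the function actually runs.
def handle(event, context=None):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The operator never provisions a server for this code; capacity exists only while requests are in flight, which is where the scaling and billing benefits come from.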

Artificial Intelligence

Artificial intelligence, also called AI, is not just science fiction, and it hasn’t been for some time. However, the AI that is starting to be used today is better than it has ever been, and it is only a matter of time before AI becomes used in data servers and other types of infrastructure. AI will be able to help with elements like failure recognition and predictive analysis. There is hope that it will allow for better control of data and infrastructure without the need to increase the actual number of staff members.
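To make "failure recognition and predictive analysis" concrete, here is a minimal sketch (purely illustrative, not any vendor's product) of the kind of statistical check such tooling automates: flagging sensor readings that deviate sharply from their recent history.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A stable rack temperature followed by a sudden spike:
temps = [20.0, 20.1, 19.9, 20.0, 20.2, 20.1, 35.0]
print(flag_anomalies(temps))  # the spike at index 6 is flagged
```

Production systems use far more sophisticated models, but the goal is the same: surface a failing fan or power supply before it takes equipment down, without adding staff.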

Improved Security

Security is just as important as it has always been, but the hackers and thieves out there who might like to cause havoc are not letting up anytime soon. Therefore, there needs to be constant vigilance when it comes to digital security and encryption for all of the data that is stored in data centers. This is true no matter where they might be located.

Physical security is important, as well. This includes ensuring that there is plenty of fire protection, as well as protection from thieves. Having protection from natural disasters is also important, and for these reasons, some companies have even started to create underground data centers. This can help to reduce the amount of risk and it can help to speed up data recovery in the face of a problem caused by a natural disaster.

Could Traditional Data Centers Disappear?

One of the questions that many people have with all of the new and advanced types of technology is whether data centers will be able to stay the course. The truth of the matter is that data centers are likely still going to be needed in one capacity or another, but their function might change somewhat. Rather than keeping their own server rooms and data centers on site, more and more companies are likely to move their operations to data centers that are offsite and/or in the cloud.

These are some of the potential trends that you are likely to see over the course of the next year or so. Of course, these are just some of the trends and changes that you are going to want to watch for.




Tier III vs Tier IV Data Center – What’s the Difference?


Whenever you are searching for retail colocation space, you want to be sure that you are getting only the very best. You want a good US location, great services, and a great price, naturally. However, you will have quite a few choices to make, and you will need to understand what type of data center will be best for you. One of the questions that you will have to answer is whether you want a Tier III or a Tier IV data center. If you are still new to colocation, you might be wondering what the difference is between these designations for data centers.

When you start to look for data centers today, one of the first things that you will see in their advertising materials tends to be whether they are a Tier III or a Tier IV facility. Of course, most people do not know what this really means or whether it makes much of a difference or not.

How Do the Tier Certifications Work?

As the tier certifications rise in number, the facilities become stronger and more secure. A Tier III data center tends to be a good choice for many different types of large companies. It will generally have a guaranteed uptime availability of 99.982%, with annual downtime of about 1.6 hours. A Tier III facility will also be N+1 fault tolerant and able to provide at least 72 hours of power outage protection.

A Tier IV data center is the strongest of all of the options, which means it has the least probability of failing and becoming unavailable to you and your customers. It will have a guaranteed uptime availability of 99.995%, which works out to annual downtime of only about 0.4 hours (roughly 26 minutes). These are 2N+1 fully redundant infrastructures, which is the main difference between them and Tier III facilities. They have 96-hour power outage protection.
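The downtime figures follow directly from the uptime percentages; a quick calculation shows how the guarantees translate into hours per year:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_downtime_hours(availability_pct):
    """Hours per year a facility may be down at a given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(round(annual_downtime_hours(99.982), 1))  # Tier III: 1.6 hours
print(round(annual_downtime_hours(99.995), 2))  # Tier IV: 0.44 hours, ~26 min
```

The same formula works for any advertised availability figure, which makes it easy to compare providers' guarantees directly.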

Essentially, even though Tier IV data centers might seem only marginally different, the differences they do have are important. Though they seem small, the improvements are major. That does not mean, however, that a Tier IV option is always the right choice.

What Is the Most Common and What’s Right for You?

Now that you can see the major differences between a Tier III and Tier IV data center, it is time to think about which one will be right for you. The true Tier IV data centers tend to be quite rare still, and you will find that most of the options available are Tier III. For most large companies, the Tier III data centers are perfectly fine. The Tier IV data centers are generally best for massive enterprise options.

What you do not want to do, though, is to go backward on the tier certification list if you can help it. If you have a very small company, it might be possible to use a Tier I or Tier II facility, but those are prone to far more downtime and come with other issues to contend with.

Whenever you are making a decision to find and choose a data center, take the time to determine the location of the data center, the Tier they claim to be, the services and prices they have available, and whether they are fully managed or not. You want to make sure they are going to be a good solution for your business, and that they will provide your equipment and data with the protection that it needs.


Data Center Certification Standards

In today’s world of business, computers are all the rage. Without them, after all, nobody could buy things online and there wouldn’t be free email accounts.

The management, dissemination, and storage of digital information is known as information technology, or IT for short. Since taking care of IT needs in-house is so expensive, businesses often outsource their IT needs to data centers.

Data centers are facilities dedicated to hosting, managing, and maintaining any and all equipment that businesses use to handle their information technology needs.

What do data centers consist of?

There are four main components of data centers: the facilities themselves, support infrastructure, equipment, and operations staff. Let’s go into greater detail about what each of these four components entails.


Facilities

Data centers need facilities to store and operate the equipment they need to meet their clients’ IT needs. These facilities include the buildings that actually house such equipment and staff, HVAC systems used to heat and cool data centers, and the infrastructure used to supply utilities like electricity and Internet bandwidth to these facilities.

IT equipment

All the hardware that is directly responsible for hosting websites and otherwise performing IT support functions falls under this category. This includes the racks, cabinets, and cables to store such hardware and keep it operating.

Operations staff

People working for data centers who aren’t directly responsible for implementing and maintaining data center equipment fall under operations staff. Members of data centers’ operations staff include managers and executives.

Support infrastructure

The hardware, software, and people used to keep IT infrastructure up and running fall under the category of support infrastructure. Businesses and organizations need to have around-the-clock access to support functions from their data centers. Otherwise, they might as well take care of their IT needs in-house.

How can businesses and organizations know that data centers are doing their jobs?

Just as organizations like SACS, the Southern Association of Colleges and Schools, oversee American colleges and universities in the Southeastern United States, several governing bodies oversee data centers and make sure they provide the services they claim to provide and keep clients’ information safe.

The rest of this article consists of a discussion about the current top data center certification standards and the governing bodies that maintain such standards in the United States and some other countries.

SAS 70 Type II data center certification

SAS 70 stands for Statement on Auditing Standards Number 70: Service Organizations, an auditing standard that was put into place by the AICPA, or the American Institute of Certified Public Accountants.

SAS 70, in short, is a standard that provides oversight and guidance for independent, third-party auditors to assess data centers’ service controls and provide them with a professional, unbiased opinion on them.

Internal controls, at least as far as auditing, oversight, and regulation are concerned, are policies and procedures that – put simply – make sure things are running like they’re supposed to. Internal controls are usually referred to simply as controls.

Although SAS 70 has been phased out of being legally required since 2011, many data centers still pay auditors to provide them with SAS 70 Type II reports and opinions. SAS 70 reports include three things:

  • An in-depth, thorough description of the tests that auditors use to affirm the effectiveness of the data center’s internal controls.
  • An in-depth, thorough description of any and all internal controls that are currently being used to keep things running as they’re supposed to.
  • Most importantly, the professional, unbiased, independent opinion of the auditor regarding whether those internal controls are doing what they’re supposed to or not.

SAS 70 Type II reports are used by data centers to find out if their internal controls could be improved or if they’re doing a good enough job already. Further, these reports are often passed on to current and potential clients of data centers. Most data centers make SAS 70 Type II reports widely available to the world by posting them online or otherwise making them readily available to people – not just clients – who are interested in reviewing them.

One thing that’s important to keep in mind regarding SAS 70 audits is that there are no standards that must be met. Rather, auditors determine whether data centers are running as well as they should be on a case-by-case basis.

SSAE 16 data center certification

The AICPA is also responsible for overseeing SSAE 16 data center certifications. SSAE 16 stands for the Statement on Standards for Attestation Engagements Number 16, which was first published in early 2010.

SSAE 16 has largely replaced Statement on Auditing Standards Number 70: Service Organizations as the go-to standard for certifying the effectiveness of data centers’ operations here in the United States. However, as mentioned above, keep in mind that SAS 70 is still regarded highly by the likes of data centers, auditors, and the potential clients and existing stakeholders of data centers.

SSAE 16 is the United States’ equivalent of the international standard ISAE 3402, or the Assurance Reports on Controls at a Service Organization, which was implemented by the International Auditing and Assurance Standards Board, or the IAASB.

In order to gain SSAE 16 data center certification, data centers must subject themselves to three main things. The first of these is that they must provide auditors with a list of any and all systems they use as internal controls. Further, data centers must also thoroughly describe these internal controls.

The second thing data centers must subject themselves to is providing auditors with a full description of their overall systems. SAS 70 guidelines only call for descriptions of facility internal controls. The difference is that overall systems cover a broader variety of internal controls than facility internal controls do.

Lastly, auditors have to sign off on a statement of assertion, which has to be composed by the managers of data centers themselves. These statements of assertion, more or less, consist of various pledges that data center managers agree to hold themselves to.

If it’s not already clear, the SSAE 16 data center certification supersedes its SAS 70 Type II counterpart, though many data centers hire auditors to hold them to both of these standards.

SOC Types 1, 2, and 3 data center certifications

The AICPA also provides regulatory guidelines called Service Organization Controls for data centers. Also known as SOCs, Service Organization Control reports fall under the categories of Types 1, 2, and 3.

Service Organization Control reports cover five distinct, well-defined areas of internal controls: privacy, availability, processing integrity, security, and confidentiality. Without getting into the definition of each of these areas of coverage, just know that they collectively make up the focus of the AICPA’s Trust Services Principles and Criteria.

We won’t be going over the differences between the types of Service Organization Controls. Rather, we’ll discuss the basics of SOC Type 2 data center certifications, which are the most comprehensive and stringent of the three, some of which has already been discussed above.

Service Organization Control Type 2 compliance addresses organizations that provide information technology and cloud computing services. The AICPA’s Attestation Standard 101 is used extensively in auditing data centers to determine whether they are certified under Service Organization Control standards.

Auditors must sign off on written statements of assertion, which are written directly by the managers of data centers. These statements must contain descriptions of the data center systems that data centers use to operate.

LEED data center certification

The United States Green Building Council developed and oversees LEED data center certification, which stands for Leadership in Energy and Environmental Design.

In the United States, data centers aren’t required to hold themselves to LEED standards, though many clients of data centers prefer that they be held to them.


Uptime Institute tier certification

The Uptime Institute defines four tiers for data centers to be classified under. Tier 1 data centers usually provide services to small businesses. They also have to be available for support at least 99.671 percent of the time, which equates to an annual downtime of about 29 hours.

Tier 2 data centers must be available 99.749 percent of the time, equating to an annual downtime of precisely 22 hours. They often provide services to mid-sized and small businesses.

Tier 3 data centers can only be out of service for 1.6 hours each year, must be fault-tolerant, and able to sustain operation throughout a three-day power outage.

The largest businesses and organizations outsource their data center needs to Tier 4 data centers, which are down a maximum of only about 26 minutes each year. This means they’re available at least 99.995 percent of the time. They also have to maintain at least two independent utility paths, be able to stay open throughout a 96-hour power outage, and be fully redundant.

Datacenter Redundancy N+1, N+2 vs. 2N vs. 2N+1

While it might be one of the most annoying things on earth for a parent to be asked “are we there yet” a dozen times within a 30-minute ride, redundancy is everything for data centers. Redundancy systems can prevent outages from having a negative impact on a data center’s reputation, reliability, business operations, and ultimately its financial bottom line. What does redundancy entail? Why’s it important? What are the redundancy system options for a data center? Let’s explore.

What Is Data Center Redundancy And Why’s It Necessary?

So, what is redundancy in technical engineering terms? It’s basically a failsafe or backup in the form of a system’s critical components/functions being duplicated in ways that bolster the system’s reliability. In other words, the system is more likely not to fail when the primary source of power goes out.

Now, how does redundancy apply to data centers? The focus here is on the amount of spare power needed to provide customers with a backup power supply when power outages occur, a huge cause of data center downtime. No data center likes to hear the word “downtime,” but it’s a reality for data centers across the U.S. every year.

A 2013 study on data center outages by the Ponemon Institute surveyed 584 entities with some operational responsibilities for data centers. The findings showed the following statistics:

  • 85 percent experienced loss of primary utility power within the last two years.
  • Unplanned outages were experienced by 91% of the above respondents.
  • A complete data center shutdown lasted about two hours on average.
  • Across all failures, downtime averaged over 90 minutes per incident.

Why do outages happen? Most will immediately say weather, right? But, weather isn’t the only source of outages. From internal or external equipment failure to someone hitting a power line with heavy outdoor equipment, the possibility for an unplanned outage means that failure can happen to any data center and at any time.

Outages can cost data centers significant revenue, particularly for businesses driven by internet sales that require continual connectivity. The average data center loses $138,000 in revenue per hour of downtime. For a big business like Amazon, lost revenue soars to over $1,000 per second of downtime.
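Putting those quoted rates side by side makes the gap vivid:

```python
# Average data center: $138,000 lost per hour of downtime (quoted above).
avg_loss_per_hour = 138_000
print(round(avg_loss_per_hour / 3600, 2))  # ~ $38.33 lost per second

# An Amazon-scale business: $1,000+ lost per second of downtime,
# which scales up to millions per hour.
amazon_loss_per_second = 1_000
print(amazon_loss_per_second * 3600)       # $3,600,000+ per hour
```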

Redundancy Matters For Data Centers

The Ponemon study goes a long way toward showing why data centers everywhere should implement DCIM (Data Center Infrastructure Management). If revenue matters, then downtime matters. If downtime matters, then prevention matters. If prevention matters, then redundancy is a must.

Larger entities generally have their servers at Tier 3 or Tier 4 data centers, which counter unforeseen power outages with “sufficient” redundancy power systems. That said, not all of these redundancy systems are equal in terms of value and protection. Look at how a system is set up to determine the degree of failsafe protection it offers, specifically whether it is an N+1, 2N, or 2N+1 setup.

N+1, 2N, and 2N+1 Redundancy Systems: What’s The Difference?

N+1 System

Imagine that you’re having a party with 20 invited guests. You’d naturally need 20 plates, right? That’s your “N” value. But what if someone brings an unexpected guest or someone shows up out of the blue? The +1 accounts for the extra plate you may need.

In data center language, this is called parallel redundancy: the number of UPS (uninterruptible power supply) modules you need for essential connected systems, plus one, with that backup UPS available 24/7. The result is a decreased chance of downtime.

N+1 systems do have redundant equipment, but the system still operates on a common feed/circuit with at least one common point. This leaves the system open to failure. A fully redundant system has completely separated feeds as a failsafe. So, the degree of protection here is better than no failsafe, but things could be better.

2N System

Again, you’re having a party with 20 guests. What 2N does is double the expected number, so you’d have 20 extra plates for your party, not just one.

For data centers, a 2N redundancy system means double the equipment that’s needed for essential operations. Each set runs separately, with no possible common points of failure. It’s a fully redundant system capable of offering an independent failsafe should an extended power outage happen and you need to keep things running. Think of it like having a spare car ready to roll if your primary vehicle gets a flat tire.

2N+1 System

This redundancy system is a combination of the above two. It doubles the amount of equipment needed and also has an extra piece for good measure. It’s the most comprehensive redundancy system, but most data centers find the 2N to be adequate and more financially friendly.
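Continuing the plate analogy, the module counts for each scheme are easy to tabulate. This small helper (the function name is illustrative) shows what each setup requires when the critical load needs N UPS modules on its own:

```python
def ups_modules(n, scheme):
    """Number of UPS modules each redundancy scheme calls for,
    given a critical load that needs `n` modules by itself."""
    counts = {
        "N":    n,          # no redundancy: exactly enough
        "N+1":  n + 1,      # one spare module
        "N+2":  n + 2,      # two spare modules
        "2N":   2 * n,      # a full independent duplicate set
        "2N+1": 2 * n + 1,  # full duplicate plus one spare
    }
    return counts[scheme]

# For a load that needs 4 modules:
for scheme in ("N+1", "2N", "2N+1"):
    print(scheme, ups_modules(4, scheme))  # 5, 8, and 9 modules
```

The cost difference between schemes grows linearly with N, which is why many data centers settle on 2N rather than 2N+1.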

The Largest Data Centers in the USA

America’s Massive Data Centers

Data centers are massive infrastructures that serve as a home to a network or a company’s most critical systems. These centers host an astounding amount of data and information, which are significant in ensuring smooth and optimal daily operations. They lodge computing resources as well as critical telecommunications, which may include the following: servers, databases, storage systems, software, applications, and access networks.

Companies use these data centers for collecting, processing, storing, filing, and distributing large amounts of data that cannot be accommodated in a regular office site. The facilities listed here are the largest data centers by square footage in the United States. Although there are many impressive facilities strung across the country, they all pale in comparison to the size of the data centers found below.

  • Digital Realty-Lakeside

Topping the list is one of the oldest and largest data centers in the world. Digital Realty manages 145 sites throughout the globe, totaling some 24 million square feet of data center properties. Its biggest site is impressive at 1.1 million square feet: the Lakeside Technology Center in Chicago, built in multiple stages starting as early as 1917 and covering an entire city block within the southern loop of the Central Business District. The facility was designed by Howard Von Doren Shaw to be the corporate headquarters of printing giant RR Donnelley and Sons. It contained the printing presses that produced the popular Yellow Book (or Pages) and the Sears Catalog. Its Gothic architecture is fortified by 4,675 steel-reinforced columns that carry 10- to 12-inch floors able to support 250 pounds per square foot. This structural strength was essential to accommodate the massive amounts of paper stored in the upper levels. With 14-foot ceilings, the structure can take in heavy, bulky equipment like transformers and various production machinery. The site has 21 vertical shafts for ease of transferring loads from level to level; currently, these shafts house the fiber risers and power cabling. With four fiber vaults and three electrical power feeds, the facility can draw 100 megawatts of power.

  • The NSA-Bumblehive

The NSA’s Bumblehive is the first Intelligence Community Comprehensive National Cybersecurity Initiative (IC CNCI) data center, and the largest standing one at over one million square feet. This gargantuan building complex includes many functional subunits: chiller plants, water treatment facilities, a fire pump house, and its own electric substation. On top of that, there is a vehicle inspection facility, along with a visitor control center. Because such a massive center requires a lot of power to keep running, there are even sixty diesel-fueled generators on standby for emergencies, capable of supplying the facility for three days at one hundred percent backup capacity. The primary purpose of this data center is to cope with the influx of digital data resulting from the leaps and bounds in global networking. The goal of the NSA data center is to keep track of all forms of communication, a massive and ambitious undertaking that includes tracking the contents of private emails, internet searches, phone calls, travel itineraries, parking receipts, and the like! It feels as if nothing is too small and no data is too minute to notice.

  • QTS Metro-Atlanta, GA

Standing at 970,000 square feet, the third biggest facility to land on the list is the QTS Atlanta Metro. With a data center footprint of 530,000 square feet, it indeed belongs on the list of gigantic data centers. Because of its size, it has its own fully operational, on-site Georgia Power substation. On top of that, it has direct fiber access to multiple technology and communications carriers.

  • IO- Edison, NJ

One of the largest modular data centers in the United States, IO in Edison, New Jersey takes up 830,000 square feet. Notably, this is a former printing press facility of the famous daily, the New York Times. It is also one of the few data centers that were able to begin running almost immediately, thanks to its modular technology. Perhaps the largest perk that sets this data center apart from the others is its prime location; sitting next to a large power switching station is a clear advantage. IO New Jersey boasts fiber optic connectivity from no fewer than two of the largest IP backbones around the globe.

  • Terremark Worldwide-Miami

In operation since 2001, this large data center located in bustling Miami occupies 750,000 square feet of total data center footprint. Given its location, this six-story building was fortified to withstand hurricanes. It has served as a connection hub for Latin America as well as the Southeastern United States. Converging at this building are 160 networks, which makes for a thriving connectivity ecosystem for its unique target audience. On top of the facility's roof are three large globes, which conveniently conceal two 16-meter satellite dishes and one 14-meter dish. The latter was set up to provide backup connectivity for its esteemed clientele on the off chance that the facility loses its fiber connections.

  • Microsoft- Chicago, IL

Everyone knows Microsoft, and it is not surprising to find this software giant, one of the pioneers of the industry, on this list. With around 700,000 square feet of space, this center is not the cookie-cutter sort; its unique design makes it atypical compared with other bustling data centers. The first floor was built to accommodate 40-foot shipping containers, but they aren't really meant for storage. Instead, they are packed with web servers. The upper level contains the traditional space for all the data needs.

  • IO- Phoenix, AZ

Nabbing the coveted 7th spot on the list is another IO facility, this time located in Phoenix, Arizona. With a compound occupying 538,000 square feet of space, the company is able to house both the data center and its main headquarters in this one location. The main highlight is actually not the size of the data center but its rooftop solar paneling; IO is one of the first companies to utilize this technology, and its solar power system can supply 4.5 megawatts of energy for the facility.

  • Apple-Maiden, NC

This facility sits on 183 acres of land that Apple purchased specifically for this purpose. Move over, Google and Microsoft! The buzz around town says the company is looking to add another 75 acres, specifically the lots sitting across the road. What makes this site remarkable is its capacity to accommodate drastic expansions. A large private solar panel array was set up to power this data center.

  • Microsoft- Quincy, WA

This is the second Microsoft center in the top 10 list. With 470,000 square feet, the Microsoft Quincy data center is home to the equipment and technology powering one of Microsoft's newest babies, Windows Azure, an innovative and functional cloud development platform. This center has enormous storage capacity and can hold 3.7 trillion photos.

  • DuPont Fabros-Ashburn, VA

Last but most definitely not least is DuPont Fabros' Loudoun Metro Data Center located in Ashburn, Virginia. Its 416,209 square feet are only a drop in the bucket for DuPont Fabros, which already owns and operates at least a half dozen data centers in the area.

How Many 1U Servers will Fit in a Rack?

Just like warehouses store products, data centers are warehouses for digital information. They contain things like servers that are used to host websites. Businesses often outsource their web hosting, data storage, and information technology needs to data centers because it’s cheaper, data centers are more reliable than most in-house operations, and they’re often more secure than businesses’ in-house operations.

One of the most prevalent components of data centers is servers used to host websites. These servers are frequently stacked on top of one another to save room on large server racks that are dedicated to the exclusive storage of servers.

To answer the question of how many 1U servers will fit in a server rack, we first need to define what 1U means.

What is a standard rack unit?

Just like temperature and pressure are measured in degrees Celsius or Fahrenheit and in pascals, respectively, standard rack units are used to indicate how large servers are.

In the field of data center colocation, the sizes of servers are expressed in standard rack units, often abbreviated as U or RU.

Virtually all modern servers are the same width and length. The only thing different about their size is how tall they are. One unit or rack unit equates to a height of 1.75 inches, or 44.45 millimeters.

A server that is 1.75 inches tall is given a rack unit size of 1U. One that is seven inches tall, for example, is given a rack unit size of 4U.
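The conversion above is simple arithmetic: divide a server's height in inches by 1.75 to get its U size. A minimal sketch in Python (the function name and the whole-unit check are illustrative, not part of any standard):

```python
RACK_UNIT_INCHES = 1.75  # one rack unit (1U) equals 1.75 inches

def height_to_rack_units(height_inches: float) -> int:
    """Return the rack-unit (U) size for a server of the given height."""
    units = height_inches / RACK_UNIT_INCHES
    # Real servers come in whole rack units, so flag anything else.
    if units != int(units):
        raise ValueError(f"{height_inches} inches is not a whole number of rack units")
    return int(units)

print(height_to_rack_units(1.75))  # a 1U server
print(height_to_rack_units(7.0))   # a 4U server
```

Running this prints 1 and 4, matching the examples in the text.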

How big are server racks?

Just as servers come in different sizes, racks come in different sizes as well. Racks, which are almost always constructed entirely out of metal, are sturdy frames that hold individual servers.

The three main sizes of server racks are full racks, half racks, and quarter racks.

Full racks hold 42 rack units’ worth of servers stacked on top of one another. In other words, full racks hold about six feet of servers in terms of height.

Half racks hold anywhere between 18 rack units and 22 rack units, or roughly three feet of servers stacked on top of one another.

Quarter racks generally hold between 10 and 12 rack units’ worth of servers, which comes out to about 1.5 feet of servers.
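These figures answer the section's title question directly: because a 1U server occupies exactly one rack unit, a rack's capacity in U is also the maximum number of 1U servers it can hold. A small sketch using the numbers above (the names and the capacity table are illustrative):

```python
# Rack capacities in rack units (U), per the figures quoted above.
# Half and quarter racks vary, so they are stored as (min, max) ranges.
RACK_CAPACITY_U = {
    "full": 42,           # 42U, about six feet of servers
    "half": (18, 22),     # between 18U and 22U
    "quarter": (10, 12),  # between 10U and 12U
}

def max_1u_servers(rack: str) -> int:
    """Return the most 1U servers the given rack size can hold."""
    capacity = RACK_CAPACITY_U[rack]
    return capacity if isinstance(capacity, int) else capacity[1]

print(max_1u_servers("full"))     # 42
print(max_1u_servers("quarter"))  # 12
```

So a full rack fits 42 of the 1U servers, a generous half rack up to 22, and a quarter rack up to about 12.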

Some server racks also feature side-by-side columns for holding servers. When shopping for racks, data center managers can simply refer to a rack's specifications to figure out whether it has multiple columns or just one.

Why are there different sizes of racks to hold servers?

Some facilities have very high ceilings. In such facilities, data centers are able to store dozens, if not hundreds, of servers stacked on top of one another. Other facilities have standard eight-foot or 10-foot ceilings, in which managers usually don’t house anything taller than single full racks.

Data centers need to know what size various racks are in order to plan out their server-hosting infrastructure. The aforementioned units of measurement exist to make such planning easier.