GRC & Minimus Servers Blog

Experiment: Temperature Set Point vs Server Fan Power

Posted by Alex McManis on June 17, 2016
There is a prevalent industry focus on increasing data center temperature to improve PUE and lower power usage, but when does it go too far? While it may be common knowledge that raising data center air temperature to above 70°F saves power, finding the sweet spot and exploring higher temperatures with free cooling takes some experimentation.

Even if you aren’t considering raising your data center set point, consider that inlet air temperature varies significantly from top to bottom of the rack. Airflow engineering attempts to even out airflow across the room, but a variation of 10°F is still not uncommon.

To find the effect of higher temperatures, GRC decided to measure the power draw of a server at varying inlet air temperatures – and you should, too. Using a power meter and a thermocouple, we gradually increased the amount of recirculation to raise the server's inlet temperature.

*Note: The purpose of raising the air set point is to save power at the chiller. We can't test that effect directly because GRC data centers are chiller-less.

The following data is taken from a 2009 Dell R410 populated with a single, low-power processor. Since the chassis is lightly loaded, the fans aren't working strenuously. The effect of fan power in other chassis is shown in Table 3 below.


Table 1: Dell R410 (2009) lightly loaded with a single processor, representing relatively low fan power
Ambient Temperature (ºC) | Ambient Temperature (ºF) | Power (Watts) | Increase (vs. 27ºC)
27ºC | 80.6ºF | 137.7 | –
29ºC | 84.2ºF | 139.4 | +1%
31ºC | 87.8ºF | 141.5 | +3%
33ºC | 91.4ºF | 144.0 | +5%
33ºC | 91.4ºF | 144.1 | +5%
35ºC | 95.0ºF | 148.4 | +8%
37ºC | 98.6ºF | 154.0 | +12%


Table 2: Distributed Computing Load and Air Temperature


We found that server power increased faster than linearly with inlet temperature: each additional degree of inlet air cost more watts than the last. We also measured the difference between servers in air and servers submerged in GRC cooling equipment.
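As a quick check on that trend, the short Python sketch below recomputes the per-degree power increase from the Table 1 readings (using 144.0 W for the 33ºC row):

```python
# Table 1 readings: (inlet temperature in degC, measured server power in W)
readings = [(27, 137.7), (29, 139.4), (31, 141.5),
            (33, 144.0), (35, 148.4), (37, 154.0)]

baseline_w = readings[0][1]
for (t0, p0), (t1, p1) in zip(readings, readings[1:]):
    watts_per_deg = (p1 - p0) / (t1 - t0)            # marginal cost per degree
    pct_increase = 100 * (p1 - baseline_w) / baseline_w
    print(f"{t0}-{t1} degC: {watts_per_deg:+.2f} W/degC "
          f"(total +{pct_increase:.0f}% vs. 27 degC)")
```

The marginal cost climbs from under 1 W per ºC near 27ºC to nearly 3 W per ºC at 37ºC, consistent with the +12% figure in the table.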

Table 3: Power Savings from Submerging in Oil

Brand | Model | Size | Power Savings in Oil*
OpenCompute | Winterfell | 2U | 9%
Dell | 1950, Gen III | 1U | 30%
Dell | R410 | 1U | 8%
Dell | C6100c | 2U, 8 CPU | 17%
Supermicro | 1026 | 1U | 20%
IBM | X3550 | 1U | 8%
HP | 585 G5 | 4U | 18%
CoinTerra | Miner | 4U | 29%

*Compared to 75-80ºF inlet air temperature

With your servers, how much power could you save by running the fans slower? How much if you submerged the servers and removed the fans altogether?

To learn more about Green Revolution Cooling's solutions, call us at +1 512-692-8003 or contact us here.


Read More

Topics: fans, temperature, Servers, components, hardware, cost, cost optimization

Transformer Efficiency

Posted by Christiaan Best on June 8, 2016

Transformer Efficiency: Is it worth it?

PUE often encompasses all of the losses from the utility down to the server, including the transformers that step power down to 400 V or 208 V three-phase. To get an understanding of your "total" efficiency, you first have to understand transformer efficiency and costs.

Here are a few things of interest to data center operators:
  • Several factors contribute to transformer inefficiency (electrical losses); a simple loss model is sketched after this list:
    • Hysteresis due to charging the field in the iron core (which is NOT proportional to load).
    • Line losses through the coil (which grow with the square of the load).
  • There are several different types of transformers:
    • "Standard temperature rise" – More efficient at partial loads, where data centers actually operate.
    • "Low temperature rise" – These have a low surface temperature (less heat coming off the transformer). They use larger iron cores, which make them more efficient at transferring energy at high loads, but less efficient at partial loads, where data centers actually operate.
    • "DOE 2016" – In the US, the Department of Energy released a new efficiency standard for all transformers sold. These transformers are more efficient across the board.
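To make the partial-load point concrete, here is a minimal sketch of that two-part loss model. The loss figures are illustrative assumptions, not Square D specifications:

```python
# A minimal sketch of the two-part loss model described above: a fixed
# core (hysteresis) loss plus a coil loss that grows with load squared.
# The loss figures below are illustrative assumptions, not real specs.

RATED_KVA = 225.0       # rating of the transformer in Figure 1
CORE_LOSS_KW = 0.9      # assumed constant no-load (core) loss
COIL_LOSS_KW = 2.7      # assumed coil loss at 100% of rated load

def efficiency(load_fraction: float) -> float:
    """Efficiency at a given fraction of rated load (unity power factor assumed)."""
    output_kw = RATED_KVA * load_fraction
    losses_kw = CORE_LOSS_KW + COIL_LOSS_KW * load_fraction ** 2
    return output_kw / (output_kw + losses_kw)

for pct in (10, 25, 50, 75, 100):
    print(f"{pct:3d}% load: {efficiency(pct / 100):.2%}")
```

With these assumed numbers, efficiency peaks near 58% load, where the fixed core loss equals the load-dependent coil loss; that is the mechanism behind the partial-load behavior noted above.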
Below, Figure 1 shows a typical efficiency curve for a 225 kVA transformer from Square D, juxtaposed with older standard temperature rise and low temperature rise transformers.

Figure 1: Transformer efficiency curves for standard, low temperature, and DOE 2016 type transformers
Read More

Topics: data center efficiency, transformers, cost, efficiency, power

Server Cost Breakdown By Components

Posted by Christiaan Best on June 1, 2016
A lot of people are curious to know how server costs break down, component by component. The costs can vary hugely depending on the configuration, and discounts apply when you buy components in larger volumes. We put together an example below for a typical air-cooled server bought in quantities of a few hundred. There are a few takeaways:
  • Different buyers get different pricing* based on volume, type of organization, application, relationship with sales rep, etc. Big companies can end up spending as much as 40% less than small businesses.
  • CPUs are a dominant cost and are rarely negotiable
  • Memory and storage pricing is relatively constant between vendors after discounts
*GRC does not practice price discrimination.


Server Cost Breakdown

Note: the prices in the table below have been sourced from popular electronics vendor websites and are not necessarily indicative of Minimus pricing.

Component | Quantity | Price Estimate | Extended Total
CPU: E5-2650v4 (12 core) | 2 | $1,116.00 | $2,232.00
Memory: 16GB, DDR4-2400, ECC | 8 | $95.00 | $760.00
Storage: Boot SSD, 120GB | 1 | $119.00 | $119.00
Storage: 480GB, Medium Endurance SSD | 2 | $391.97 | $783.94
Network Card: None | 0 | $0.00 | $0.00
Chassis Costs:
Motherboard: Dual Socket E5-26xx v4, 8 Memory DIMMs, On-board Network: 2x RJ45 GbE LAN Ports, 1x RJ45 IPMI LAN Port | 1 | $299.00 | $299.00
CPU Heat Sink | 2 | $14.00 | $28.00
Power Supply | 1 | $95.00 | $95.00
Storage Backplane | 1 | $75.00 | $75.00
Drive Caddies | 4 | $13.00 | $52.00
Fans | 5 | $10.00 | $50.00
Internal Cables | 1 | $20.00 | $20.00
Riser Cards | 1 | $19.30 | $19.30
Sheet Metal Case | 1 | $100.00 | $100.00
Assembly Labor and Test | 1 | $150.00 | $150.00
10% Markup | 1 | $478.32 | $478.32
Total Cost | | | $5,261.56
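The table's arithmetic is easy to reproduce. The sketch below (component names abbreviated) sums the extended totals, applies the 10% markup, and lands on the same $5,261.56:

```python
# Reproducing the table's arithmetic: sum the extended totals, apply the
# 10% markup, and check the grand total. (Names abbreviated from the table.)
extended_totals = {
    "CPU: E5-2650v4 x2":        2232.00,
    "Memory: 16GB DDR4 x8":      760.00,
    "Boot SSD 120GB":            119.00,
    "480GB SSD x2":              783.94,
    "Motherboard":               299.00,
    "CPU heat sinks x2":          28.00,
    "Power supply":               95.00,
    "Storage backplane":          75.00,
    "Drive caddies x4":           52.00,
    "Fans x5":                    50.00,
    "Internal cables":            20.00,
    "Riser cards":                19.30,
    "Sheet metal case":          100.00,
    "Assembly labor and test":   150.00,
}

subtotal = sum(extended_totals.values())          # $4,783.24
markup = round(0.10 * subtotal, 2)                # $478.32
print(f"Subtotal: ${subtotal:,.2f}")
print(f"10% markup: ${markup:,.2f}")
print(f"Total cost: ${subtotal + markup:,.2f}")   # $5,261.56
```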



About GRC and Minimus Servers

Green Revolution Cooling has introduced Minimus Servers in partnership with hardware giants Supermicro and Gigabyte. The Minimus platform offers reliable, purpose-built servers for a fraction of the cost of OEM servers. The core of Minimus's cost-effectiveness is that customers get exactly the configuration they need, without the added cost of extraneous features and the brand premiums normally tacked onto OEM offerings.

To learn more about Minimus Servers contact us at [email protected] or call +1 512-692-8003.


Read More

Topics: Servers, components, hardware, cost, cost optimization

Introducing Minimus Servers

Posted by Matt Solomon on April 14, 2016
Last week, we introduced Minimus Servers to the market. At its core, Minimus is two things: a less expensive server and a remarkably simple way to bring liquid cooling into your data center. It's an integrated package made up of purpose-built servers, standalone rack-based cooling, and power distribution.

Designed in partnership with high-quality OEM manufacturers like Supermicro and Gigabyte, the Minimus Server is a reliable, low-cost server that offers savings of 50% to 60% compared to Dell.com.

Why Minimus? Why now?

For years, Green Revolution Cooling has been helping customers design and build their own custom hardware so they can take full advantage of our immersion cooling technology. GRC has deployed thousands upon thousands of these servers, and they boast a failure rate of less than 1%, so we're now making the Minimus architecture widely available to the entire market. In conjunction with immersion cooling, Minimus Servers make data centers more efficient, more cost-effective, and easier to deploy - simple as that.

How do Minimus Servers cost a fraction of average OEM prices? Take a look at the image below.
Don't pay for unnecessary components with fully customizable Minimus Servers

How to save 50% to 60% compared to Dell.com

The Minimus lets you pick and choose the parts and features your application needs - processors, memory, and so on - while eliminating extraneous parts and features such as fans, intricate chassis, architecture for redundant power supplies, and hard drive caddies.

The result is a lean, low-cost server that is custom-built for your application. 

High Reliability

Thousands of Minimus-designed servers have already been deployed in data centers around the world and are performing efficiently. They have also proven to be some of the most reliable servers in the field. This reliability comes from the use of premium components from quality manufacturers like Supermicro, Gigabyte, and Intel, used in conjunction with a naturally protective immersion cooling environment.
Read More

All You Need to Know About... Sustainability

Posted by Matt Solomon on February 9, 2016

The new metrics for data center efficiency and sustainability

While long considered the gold standard for measuring data center efficiency, PUE is not without its faults. For one, it only takes into account the efficiency of power usage (i.e., the ratio of total facility power to IT power). It can be a good measure of cooling system and facility efficiency; however, it does not capture the holistic environmental impact of a data center. To truly understand a data center's impact on a broader environmental level, it is critical to measure the water usage and carbon emissions for which that data center may be responsible, both directly and indirectly.

As always, our All You Need to Know About... series just scratches the surface of a different subject in each entry. To dive deep into all the data and specifics, you'll want to download our corresponding white paper, "A Paradigm Shift in Data Center Sustainability".
Download the full white paper on Data Center Sustainability

The Green Grid defines the following metrics for measuring the efficiency of a data center in terms of its water use and carbon emissions (a worked example follows the definitions):

Water:
  • WUEsite = Annual Site Water Usage / IT Equipment Energy (in L/kWh)
  • WUEsource = (Annual Source Energy Water Usage + Annual Site Water Usage) / IT Equipment Energy (in L/kWh)
Carbon:
  • CUE = Total CO2 Emissions Caused by the Total Data Center Energy / IT Equipment Energy (in kgCO2eq/kWh)
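For illustration only, here is how those metrics (plus PUE) would be computed for a hypothetical facility. Every input value below is made up:

```python
# Worked example of the Green Grid metrics above (plus PUE) for a
# hypothetical facility; every input value here is made up.
it_energy_kwh  = 1_000_000   # annual IT equipment energy
facility_kwh   = 1_400_000   # annual total facility energy
site_water_l   = 1_800_000   # annual on-site water usage (e.g., cooling towers)
source_water_l = 4_000_000   # annual water used in generating that energy
co2_kg         = 700_000     # annual CO2eq attributed to data center energy

pue        = facility_kwh / it_energy_kwh                     # dimensionless
wue_site   = site_water_l / it_energy_kwh                     # L/kWh
wue_source = (source_water_l + site_water_l) / it_energy_kwh  # L/kWh
cue        = co2_kg / it_energy_kwh                           # kgCO2eq/kWh

print(f"PUE = {pue:.2f}")
print(f"WUEsite = {wue_site:.2f} L/kWh, WUEsource = {wue_source:.2f} L/kWh")
print(f"CUE = {cue:.2f} kgCO2eq/kWh")
```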
Download the "A Paradigm Shift in Data Center Sustainability" white paper
Learn more about the metrics and how oil immersion cooling from GRC can help reduce your carbon and water footprint by up to 95%.

Have questions? Want to see how the CarnotJet system can cut your data center energy use and costs? Contact us via email at [email protected] or by phone at +1 (512) 692-8003.
Read More

All You Need to Know About... ElectroSafe

Posted by Dhruv Varma on January 13, 2016
INTRODUCTION:

Today's post continues our series All You Need to Know About... with a "deep dive" into the magical liquid that makes it all possible – ElectroSafe. We'll look at all aspects of our coolant, from fire safety to what makes it the perfect liquid for immersion cooling. For further reading, download the ElectroSafe Coolant Fact Sheet and FAQs, a brief one-page overview that includes the NFPA fire diamond and coolant characteristics such as evaporation rate and density. Then give our white paper on the subject, Submerged Servers in the Data Center, a read to discover how safe ElectroSafe really is.


What is ElectroSafe?

The ElectroSafe coolant, used in the CarnotJet liquid cooling system, is a clear, odorless, non-toxic blend of dielectric mineral oils. 
Read More

All You Need to Know About... Storage Drives in Immersion Cooling

Posted by Matt Solomon on November 30, 2015
INTRODUCTION:

In this edition of the All You Need to Know About... series, we talk about one of the most commonly asked questions regarding liquid immersion cooling: what kind of drives can be immersed, as well as how standard spinning disks can be used in the CarnotJet system.

Drives that can be immersed


Solid State Drives (SSDs)

As the name suggests, SSDs are solid-state devices with no moving parts. Hence, they can be submerged in ElectroSafe right out of the box, just like other solid-state components such as memory DIMMs and motherboards. With steadily declining prices and superior speed and performance, SSDs are quickly becoming the storage device of choice.

Helium-filled Drives

Helium-filled drives, such as the Ultrastar He6 offered by Western Digital’s HGST, are hermetically sealed and can be directly submerged in ElectroSafe.

How unsealed spinning disks can be used in the CarnotJet system


While unsealed HDDs cannot be submerged out of the box like SSDs and hermetically sealed drives, they can still be used with immersed servers by being kept above the surface of the oil. This is done through the use of a modified drive caddy designed to fit in the existing drive caddy slot.


CONCLUSION:

To learn more about the performance and reliability of the CarnotJet system click here, then get in touch via email at [email protected] or phone at +1 (512) 692-8003.
Read More

All You Need to Know About... Floor Space Optimization

Posted by Dhruv Varma on November 25, 2015
INTRODUCTION:

Whether you are planning a retrofit or a greenfield build, floor space utilization is an important consideration in the data center. Floor space utilization is measured in a number of ways - watts per square foot, servers per square foot, square feet per rack, etc. Below are the ways in which GRC's CarnotJet system saves valuable floor space compared to traditional architectures.

Get More Compute per Square Foot with GRC


Less Infrastructure

The CarnotJet system not only eliminates the need for traditional CRACs, CRAHs, and perimeter cooling, it also helps downsize the power and backup infrastructure in proportion to the reduction in peak power.

Denser Layout: No hot and cold aisles

CarnotJet racks are installed back-to-back and end-to-end. Unlike traditional air-cooled environments, no space is wasted on hot and cold aisles, which eliminates every alternate aisle and frees up floor space.

Denser Racks: Support more than 100 kW per rack

The ElectroSafe coolant used in the CarnotJet system has 1,200 times the heat capacity of air (by volume). This, along with its superior thermal conductivity and closed-loop thermal management, allows for extremely high IT load densities in the racks. In fact, some high-performance customers have packed more than 100 kW of IT load into a single rack.
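A rough back-of-envelope sketch, using only the 1,200x figure quoted above plus the standard volumetric heat capacity of air, shows why that density is feasible. The rack size and temperature rise are illustrative choices:

```python
# Back-of-envelope: volumetric flow needed to carry a rack's heat at a
# given coolant temperature rise, using Q = C_vol * flow * delta_T.
# Air's volumetric heat capacity is roughly 1,210 J/(m^3*K) at room
# conditions; the oil value is taken as 1,200x that, per the figure above.

AIR_C_VOL = 1210.0              # J/(m^3*K), air near room conditions
OIL_C_VOL = 1200 * AIR_C_VOL    # J/(m^3*K), per the 1,200x figure

def flow_m3_per_s(heat_kw: float, delta_t_k: float, c_vol: float) -> float:
    """Volumetric flow (m^3/s) required to remove heat_kw at a rise of delta_t_k."""
    return heat_kw * 1000.0 / (c_vol * delta_t_k)

rack_kw, dt_k = 100.0, 10.0     # a 100 kW rack with a 10 K coolant rise
air_flow = flow_m3_per_s(rack_kw, dt_k, AIR_C_VOL)
oil_flow = flow_m3_per_s(rack_kw, dt_k, OIL_C_VOL)
print(f"Air: {air_flow:.1f} m^3/s   Oil: {oil_flow * 1000:.1f} L/s")
```

Carrying 100 kW at a 10 K rise takes over 8 m^3/s of air but only about 7 L/s of oil, which is why a single immersion rack can absorb loads that would overwhelm air cooling.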

Further Reading


This post only scratches the surface of how efficient, powerful, dense and cost-effective GRC's CarnotJet system is. To read more in-depth explanations and see sample layouts, download our white paper about floor space optimization.


CONCLUSION:

To learn more about Green Revolution Cooling's CarnotJet system, please feel free to get in touch via email at [email protected] or by phone at +1 (512) 692-8003.
Read More

Data Center Liquid Cooling Myths - Busted!

Posted by Matt Solomon on October 19, 2015

INTRODUCTION:

You may have heard a few impressive facts about liquid immersion cooling - like our CarnotJet system - such as its incredible CAPEX and OPEX savings. However, you may also have heard some things that made you unsure about this emerging technology. Today we dispel some of the myths about liquid immersion cooling and set things straight with the facts about this revolutionary technology.
Read More

Top 5 Reasons: Why It's Time to Start Submerging Data Center Servers In Oil

Posted by Dhruv Varma on October 9, 2015

INTRODUCTION:

Oil immersion cooling from Green Revolution Cooling has been on the market for over five years now. Over the years, we've seen adoption of liquid immersion cooling expand across industries, applications, and geographies. Here's why we think it's time to take a hard look at what immersion cooling could mean for your data center:

Read More