BRINGING FACILITY AND IT TOGETHER

Toll Free: 877.781.5090 • Phone: 630.701.9310

Data Center Trends

When Disaster Strikes: Disaster Recovery Revisited

As we all know, people across the world this month have been facing tremendous disaster and damage to their property and valuables. While we don’t believe in turning disaster into profit, we do believe in reinforcing the education of preparedness, asking the questions: “What would I do if that happened to me? Would I have a plan in place? How would I get back on track after it was over?”

Unfortunately, disasters can happen at any time, in any place...and in many different ways: storms, floods, electrical explosions, fires, earthquakes, infestations, and more. The data center is no exception. One of the most pivotal pieces of modern business and communication can be swept away in an instant. So, what will you do if you lose your data center to a disaster?

Over two years ago we wrote a short article on Disaster Recovery and Business Continuity Plans – defining them and providing examples of real-world use. Since then, we have written dozens of pieces on cooling, power, telecom, service, and more. I would imagine that managing your data center follows a similar pattern. You may start out thinking about a disaster recovery plan, but soon your focus shifts to improving the cooling, then you have to upgrade your voice & data package, and before you know it the DR plan is so far down the list you wonder if you will ever get to it.

Here’s your free pass: Move Disaster Recovery up the list and get a plan in place. Let’s start.

 

WHY?

This may seem like a self-explanatory question, but if we don’t spell out the gravity of certain situations, we may not take them as seriously as we should. Here we go: your data center encounters a massive disaster and, without a DR site, you lose all capability. Every minute that your site is down, you are losing thousands of dollars – an average of $9,000 per minute, based on a recent study averaging losses across multiple business sectors. Meanwhile, customers who cannot access your services are losing faith in your business. Only 44% of businesses successfully recover the data lost in such an event, and a whopping 90% of businesses that suffer a major data loss with no DR plan are forced to shut down within two years.* A Disaster Recovery Plan is not a decoration; it is a necessity, no matter your business size or type.
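To put that $9,000-per-minute average in perspective, here is a minimal sketch of what an outage adds up to; the per-minute figure comes from the study cited above, while the outage durations are illustrative assumptions.

```python
# Rough downtime-cost estimate using the average cited above.
COST_PER_MINUTE = 9_000  # average loss per minute of downtime (USD)

def downtime_cost(outage_minutes: float) -> float:
    """Estimate revenue lost during an outage of the given length."""
    return outage_minutes * COST_PER_MINUTE

for hours in (1, 4, 24):
    print(f"{hours:>2} hour(s) of downtime ≈ ${downtime_cost(hours * 60):,.0f}")
# 1 hour ≈ $540,000; 4 hours ≈ $2,160,000; 24 hours ≈ $12,960,000
```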

 

WHERE?

The most effective location for your DR equipment is far enough away from your in-house data center to avoid the same potential disasters. The last thing you want is for a single disaster to strike your data center AND your DR site. Ideally, the two should be on separate power grids and even in different environment types (open plains vs. urban sprawl). However, they also need to be close enough to communicate quickly and effectively. Having your home data center in Chicago and a DR site in England probably will not offer as quick a failover as a site in Minnesota, for example.

 

WHEN?

As urgent as it feels now to get a DR plan in place, you must not forget to actually “plan” it before signing on the dotted line. Unless you have the means to replicate your data center in its entirety with a secondary staff to manage it, you will first have to categorize and prioritize your data. Determining which data-center-dependent parts of the business need to be back online first will help with both recovery speed and cost control. Categorizing and prioritizing is only the first of many steps in getting your DR plan in place, but doing so will help the rest of the plan fall into place much more easily. A sketch of what that prioritization might look like follows below.
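For illustration only, here is one hedged way such a prioritization might be recorded; the tier names, systems, and recovery-time objectives (RTOs) are hypothetical, not recommendations for any particular business.

```python
# Hypothetical DR priority tiers: which workloads come back first, and how fast.
recovery_tiers = {
    "Tier 1 - restore first": {"systems": ["payment processing", "customer portal"], "rto_hours": 1},
    "Tier 2 - restore next":  {"systems": ["email", "internal file shares"],         "rto_hours": 8},
    "Tier 3 - restore later": {"systems": ["reporting", "archives"],                 "rto_hours": 72},
}

# Walk the plan in priority order so the most critical workloads return first.
for tier, details in recovery_tiers.items():
    print(f"{tier}: {', '.join(details['systems'])} (target: {details['rto_hours']} h)")
```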

 

WHO?

One major component of disaster recovery that is often overlooked is personnel. A DR plan is not just for one person to know and manage; the most effective plans involve everyone and assign different responsibilities to each employee to ensure a quick and efficient recovery. With prepared personnel each attacking the situation from a different angle, and non-critical employees aware of the situation and how to manage their own parts of the business and the recovery, your business will have the best possible chance of getting back on track and remaining sturdy once recovered.

 

HOW?

You are already on the right path by reading this article. The first step to solving any problem is to simply say that you are going to solve it. Now you can give Facility IT Group a call and let us guide you through the next steps of the process. We are here and ready to help.

 

*Based on a white paper from StorageCraft.

Dust Off Your Flywheels!

Have you ever heard of the flywheel UPS? If you haven’t, you shouldn’t feel out of the loop. Flywheel UPS systems are, in many eyes, a “dated” technology. However, recent advancements, further implementation, and study have shown that maybe...just maybe...they are more effective and efficient than battery-powered UPSs.

Below we have compiled a table of items to help quickly compare a Magnetic Flywheel UPS to a Battery-Powered UPS. Underneath the table you can find a more in-depth description of some of the topics.

 

                         | Magnetic Flywheel UPS | Battery-Powered UPS
Type of Energy           | Kinetic | Chemical reaction
Frequency of Maintenance | Annual check-up | Semi-annual check-up
Sizing & Placement       | Fixed | Flexible
Unit Lifespan            | Indefinite | Battery replacement every 5-7 years
Environmental Impact     | None | Hazmat disposal; contains toxic chemicals
Initial Costs            | ↑ 35% | --
Lifetime Costs           | -- | ↑ 60%
Storage Limitations      | Quality and thickness of the containment vessel and cylinder determine energy output; must be protected against shrapnel explosion | Strict temperature control and physical handling, higher maintenance cost; must be protected against chemical leaks and corrosion
Electrical Efficiency    | Up to 97% | Up to 92-95%
Runtime                  | 15-30 seconds | 15-30 minutes

 

Type of Energy

What is a flywheel UPS? Your standard UPS system runs on batteries, which use a chemical reaction to store energy and deliver electrical power. A flywheel system, by contrast, stores kinetic energy in rotating cylinders spinning inside a vacuum chamber, and that energy is converted back into electricity to power the load when needed.
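For a feel for the physics, the energy stored in a spinning cylinder is E = ½Iω² (with I = ½mr² for a solid cylinder). The mass, radius, and RPM in this sketch are illustrative assumptions, not specifications for any particular flywheel UPS.

```python
import math

# Back-of-the-envelope stored energy for a spinning solid cylinder.
mass_kg, radius_m, rpm = 600.0, 0.25, 10_000   # assumed rotor parameters

inertia = 0.5 * mass_kg * radius_m ** 2        # moment of inertia I = 1/2 * m * r^2 (kg*m^2)
omega = rpm * 2 * math.pi / 60                 # angular speed (rad/s)
energy_j = 0.5 * inertia * omega ** 2          # stored kinetic energy E = 1/2 * I * w^2 (joules)

print(f"Stored energy: {energy_j / 1e6:.1f} MJ "
      f"(~{energy_j / 250_000:.0f} s of ride-through at a 250 kW load)")
```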

 

Sizing & Placement

A flywheel UPS will also take up significantly less space than a battery-powered UPS. Batteries take up a hefty chunk of space, and to include them inside the UPS cabinet not only makes it bulky, but also considerably heavier. On the other hand, flywheel systems, due to the nature of the rotating cylinders, need a fixed amount of cubic feet for installation. Battery cabinet layouts are a bit more flexible in terms of physical placement.

 

Unit Lifespan

Flywheels have been considered dated because they have typically used mechanical bearings to spin the cylinders. These bearings need regular maintenance check-ups and replacements, and depending on the physical setup, a flywheel could lose anywhere from 20-50% of its stored energy to friction in the bearings. As a result, batteries quickly took over as the dominant energy store for UPS systems thanks to their higher energy retention and better reliability.

In recent years, flywheel technology has improved dramatically. The greatest advancement has been switching from mechanical bearings to magnetic bearings. Where batteries need regular maintenance and have limited lifespans before replacement, a new magnetic-bearing flywheel in a properly sealed vacuum chamber will require virtually no maintenance or replacement - barring direct physical damage to the equipment.

 

Initial Costs // Lifetime Costs

Battery-powered UPS units are less expensive at initial purchase than magnetic flywheel units. However, it is primarily in maintenance where the flywheel outshines the batteries. Battery-powered systems should be serviced semi-annually, and the batteries must be replaced every 5-7 years even under ideal operating conditions. Environmental control is also much stricter for battery systems, adding to the cost of cooling and humidity control.

The annual service for a magnetic flywheel UPS is mostly a formality: a properly maintained system will continue to run indefinitely without replacements, and far fewer external factors affect it. Over a full lifespan, the flywheel system ends up costing roughly half as much in total as the battery system.
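As a rough, hedged illustration of why lifetime cost favors the flywheel: all dollar figures below are assumptions; only the relationships (roughly 35% higher flywheel purchase price, semi-annual vs. annual service, battery replacements every 5-7 years) come from the comparison above.

```python
# Illustrative 15-year total-cost-of-ownership comparison (all prices assumed).
YEARS = 15
battery_ups_price = 100_000
flywheel_ups_price = battery_ups_price * 1.35   # ~35% higher initial cost

battery_total = (battery_ups_price
                 + YEARS * 2 * 3_000            # semi-annual service visits
                 + (YEARS // 6) * 40_000)       # battery string swaps roughly every 6 years
flywheel_total = (flywheel_ups_price
                  + YEARS * 1 * 3_000)          # annual check-up, no scheduled replacements

print(f"Battery UPS, 15-year total:  ${battery_total:,.0f}")
print(f"Flywheel UPS, 15-year total: ${flywheel_total:,.0f}")
```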

 

Storage Limitations

Flywheels place a great amount of stress on the rotating cylinders, especially at higher speeds. It is extremely important that the RPM of the flywheel does not exceed the force limitation of the cylinder material. If a thinner material is used and the force becomes too great, it is possible for the cylinder to shatter and explode, sending shrapnel out at the speed of a bullet. To store more energy, a flywheel system needs not only a thicker and stronger cylinder, but also a thicker and stronger enclosure to protect against any possible shattering. For this reason, many designers build the flywheel into the floor of the data center to contain any expelled shrapnel.

Batteries, on the other hand, have their own limitations. If a battery is exposed to higher temperatures than it can handle, its hazardous internal chemicals can leak and damage equipment, including other batteries. Batteries can also explode and cause serious damage to surrounding equipment and/or harm to personnel. Further, batteries can fail without any visible warning. If utility power goes down, you may assume that your batteries will carry the load until generator power kicks in. But with faulty batteries that never showed signs of failure, that handoff may never happen, and suddenly your entire infrastructure is out of commission.

 

Electrical Efficiencies

It is important to note that these numbers fluctuate considerably depending on load and use. Many battery-powered UPS units have an "Eco Mode" that can raise efficiency to flywheel levels, and conversely, flywheel UPSs can lose efficiency when the load is higher and the RPM of the rotating cylinders increases. In standard modes at an average load, a flywheel can maintain about 97% efficiency and a battery-powered UPS about 92-95%. Depending on the size of the average load, that difference can range from negligible to quite significant, as the sketch below illustrates.
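Here is a quick sense of what those few percentage points mean at scale; the 97% and 93% figures come from the ranges above, while the load, hours, and electricity rate are illustrative assumptions.

```python
# Annual energy penalty of a lower-efficiency UPS at a steady IT load.
it_load_kw = 500                 # assumed IT load
hours_per_year = 24 * 365
rate_per_kwh = 0.10              # assumed electricity price, $/kWh

def annual_input_kwh(efficiency: float) -> float:
    """Utility energy drawn to deliver the IT load through the UPS."""
    return it_load_kw / efficiency * hours_per_year

extra_kwh = annual_input_kwh(0.93) - annual_input_kwh(0.97)
print(f"Extra energy at 93% vs. 97%: {extra_kwh:,.0f} kWh/year "
      f"(≈ ${extra_kwh * rate_per_kwh:,.0f}/year)")
```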

 

Runtime

Runtime is perhaps the flywheel’s greatest downfall. A battery UPS will confidently hold your power until the generator takes over. Generators take approximately 10-20 seconds on average to pick up the load when regular power has failed. Flywheel systems cut it close, holding power for only a few seconds longer than required. This can be a huge factor, especially if you cannot afford any downtime whatsoever. Despite all of the benefits above, ensuring uptime often comes first on the list of needs, and any lack of confidence that generator power will switch over quickly enough could take a flywheel off the table entirely. A worst-case comparison is sketched below.
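This sketch simply pits the low end of each hold-up range against the high end of the generator-start range quoted above; the specific seconds are taken from those ranges, nothing more.

```python
# Worst-case ride-through margin: shortest hold-up vs. slowest generator start.
generator_start_s = 20                 # high end of the 10-20 s range above
holdups = {"Flywheel": 15,             # low end of 15-30 seconds
           "Battery": 15 * 60}         # low end of 15-30 minutes

for name, holdup_s in holdups.items():
    margin = holdup_s - generator_start_s
    print(f"{name}: {margin:+d} s of margin if the generator takes {generator_start_s} s")
# A negative margin means the load drops before the generator picks it up.
```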

 

 ---

Flywheel systems are used in a very small number of data centers across the world. Some may say because the technology is dated, others because of the short run time, and some may just be waiting for more research to be done. In any case, FIG can help you make a decision for your next UPS system. If you have more questions or want more information, call us today!

You Need To Clean Up

One of the most overlooked parts of a smooth and efficient data center is also one of the easiest to see: cleanliness. It is amazing how keeping your server room clean and properly maintained can dramatically affect worker performance and infrastructure efficiency. Psychological studies have repeatedly shown that when a person’s office desk is freshly cleaned and organized, they tend to perform their regular job tasks faster and with more precision, and often show more energy than when the desk is dirty or cluttered. The same idea applies to mechanical and computerized systems: dirt and build-up block airflow and slow performance, and a dirty space makes data center managers work less efficiently and with less urgency.

Data Clean is an organization of critical environment cleaning specialists who maintain, clean, and remediate data centers. Since 1979, they have worked tirelessly to build their company into a global effort and have logged countless hours cleaning data centers, laboratories, network facilities, telecom outposts, and more.

Here is a list of 9 Reasons to Have Regular Cleanings in Your Data Center (with help from Randall Miller of Data Clean):

 

Prevent Overheating

Over time, dust and other contaminants build up, especially around open areas such as internal fans. Without regular cleaning, that build-up will hinder airflow and cause internal heat to rise until the system eventually crashes. Keep your fans and external openings clean!

 

Reduce Risk of Fire

Similar to overheating, if open passages continue to become blocked and multiple systems are affected, the internal heat of servers and other critical infrastructure can cause combustion and result in a fire. Cleaning will keep your equipment cool.

 

Prevent Static Electricity

A benefit of regular cleaning is that it prevents static accumulation. The longer static electricity is allowed to build, the more powerful it becomes, and it can cause unnecessary downtime or even damage to your infrastructure.

 

Improve Equipment Reliability

When things are dirty or unkempt, they simply do not work as well as when they are clean and pristine. Servers and data center equipment are no exception. Alongside the threats above, servers will run slower and connections will have a harder time carrying signals – everything you do not want your server room to do.

 

Protect Your Investment

Let’s be honest: designing, building, powering, cooling, connecting, maintaining, and staffing a server room can be quite expensive. And even with cost put aside, a data center plays an incredibly pivotal role in a business’ daily operations. Cleaning your critical space regularly ensures that the time and money invested are not in vain, and a regular cleaning costs less than 1% of the room’s total net worth!

 

Preserve the Integrity of Your Data

The software and vital information stored within your data center is the lifeblood of your business. Even with cloud backup and/or a disaster recovery site, the last thing you want is for the data stored within your space to become compromised. Skipping regular cleaning is, unfortunately, like inviting that to happen. Keep your data safe; keep your room clean.

 

Increase Productivity

Have you ever lost an entire afternoon looking everywhere for an item you needed but couldn’t find within the clutter? Or skipped a favorite shirt because it was in the dirty hamper? The same mentality plays out subconsciously in the workplace. The more cluttered and dirty a space is, the less likely a staff member will want to work with it – or the more likely time will be wasted working around the clutter. Maintaining a clean space will inspire productivity, efficiency, and creativity instead of stifling them.

 

Satisfy Insurance Audits

Insurance companies do not want to see unclean areas that could cost them a lot of money if something such as a fire were to happen. A clean room means a happy insurance auditor, which helps keep your company clear of issues during audits.

 

Win Confidence

It has been said that appearance is a direct reflection of how well an organization is run. If you oversee the data center space, imagine how impressed colleagues and superiors will be to see how well-kept the room is with Data Clean. Confidence in the cleanliness of that room is confidence in you to maintain it. Be the champion of your data center and keep it clean!

 

Cleanliness is the hidden key to a successfully run and efficiently working data center. Facility IT Group has partnered with Data Clean to provide these services directly to you in a safe, quick, and cost-efficient manner. Call today for a free consultation!

Save Money; Use Ebullient Liquid Cooling

Cooling in the data center is and always has been a rapidly evolving science. Combinations of perimeter units, ceiling and floor units, row units, spot coolers, and even in-rack systems work diligently to remove heat from the server room and protect your critical physical infrastructure. Systems like hot and cold aisle containment also aid in this crucial component of the data center space. As long as we are running servers, their heat will need to be removed to prevent any issues with regular operation.

Did you know that around 40% of your electrical costs are coming straight from your cooling systems?

And for such a data center necessity, this certainly seems understandable. But in a world where we are always looking to save on costs and space, cooling never seems to be an option for reduction. You need your servers for business operation, you need the cooling to protect them, you need battery backup for uptime guarantee, you need connectivity to interact with your customers and workforce.

Maybe you might take a risk and scale back security...but what if a breach occurs? Perhaps you decide to forego your service contract...now you’re in a real tight spot when the UPS goes down.

What if there was a way to adjust your cooling system – size and output – without fear of losing any capabilities and in a manner which reduces all monthly costs?

There is: Liquid cooling.


If you read the line above and instantly became nervous, you’re not alone. Liquid cooling holds somewhat of a taboo. We are told from our childhood to keep our liquids away from our technology, and so it can be difficult to accept an option that counters that thought.

Ebullient is an innovative frontrunner in the field of liquid cooling. Their DirectJet™ technology was designed with those long-standing fears of mixing liquid and technology in mind. Rather than using water, Ebullient opts for an engineered liquid designed specifically for use as a heat-transfer fluid. Crafted by 3M, this fluid is non-corrosive, non-flammable, and will not damage electronic equipment in the event of leakage or other failures.

Ebullient’s liquid cooling system can cool any server from any make or model while reducing cooling costs up to 75%.
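To see what that could mean for the overall bill, here is a minimal sketch combining the two figures above (cooling at roughly 40% of electrical cost, cut by up to 75%); the monthly bill amount is an illustrative assumption.

```python
# Best-case impact of liquid cooling on the total electric bill.
monthly_bill = 50_000            # assumed total monthly electrical spend (USD)
cooling_share = 0.40             # cooling's share of electrical cost, per the figure above
cooling_reduction = 0.75         # "up to 75%" reduction in cooling cost

savings = monthly_bill * cooling_share * cooling_reduction
print(f"Potential savings: ${savings:,.0f}/month "
      f"({savings / monthly_bill:.0%} of the total bill)")
# Potential savings: $15,000/month (30% of the total bill)
```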


Take a look at the DirectJet™ Flyer to learn more about the cutting-edge system.

We are sure you have more questions. Let’s set up a call or a meeting to learn more about this technology and see if it’s the right fit for you.

Arc Flash: Fact Sheet for the Rising Data Center Threat

Special Thanks to Doug Christianson, EMT Corp., for many contributions to this fact sheet.

 

At one point in the not-too-distant past, 100kVA was considered a LOT of power for a data center. By today’s measurements, a 1MW UPS system with 300kVA PDUs supplied by 480-600V feeders is commonplace. Even in our homes, electrical service has grown from 60 amps to over 200 amps.

With growing power requirements comes growing disaster possibilities.

An arc flash incident has always been one of those possibilities, but with the recent growth in power demand and the constant need for bigger, more capable equipment, arc flashes have grown more common and more deadly.

Some arc flash facts:

  • 5-10 arc flash explosions occur daily
  • Over 2000 people are treated annually for arc flash burns and damage
  • The heat discharge from an arc flash can reach 35,000°F, almost 4x the temperature of the surface of the sun
  • Generally, electrical systems need to operate at 480V or higher for an arc flash to occur

What is an arc flash?

In simple terms, an arc flash is an undesired electrical short circuit from one conductor to another or to a ground.

Electrical faults occur fairly often – a breaker or fuse trips and the power is interrupted. On occasion, however, an arc flash event occurs before the breaker trips. Arc flashes specifically involve melted metal, extreme temperatures, high speed, and an explosive discharge.

What causes an arc flash event?

The three main causes of almost any data center issue, including arc flash, are human error, inadequate preventive maintenance (PM), and improperly applied equipment. Arc flashes can also occur via the following:

  • Accidental contact from worker
  • Improperly applied or installed equipment
  • Contamination across insulated surfaces
  • Deterioration or corrosion of equipment
  • Dust or condensation build up

Approximately two-thirds of all electrical injuries are due to the fault of the worker.

What happens during an arc flash?

An arc flash typically begins with a short-circuit path between energized phases or from phase to ground. Because the load is removed from the path, extremely high fault currents cause the conductor metal to melt and vaporize. During the phase change from solid to gas, the solid material (copper or aluminum) expands to roughly 64,000 times its original volume, causing the arc flash explosion, which involves the following:

The electric arc is accompanied by a massive blast of plasma and molten metal, released with extreme force and explosive damage.

The pressure from the arc blast can be strong enough to blow out eardrums and damage internal organs, as well as expel shrapnel at extreme velocity.

The flash itself can be so bright as to cause temporary or even permanent blindness.

The blast can affect equipment all throughout the room, either by direct physical contact or through internal power damage downstream. In some cases, it can also be fatal to people in the room.

How to prevent an arc flash:

  1. First and foremost, data center designs should be thoroughly studied with qualified facilities electricians.
    1. Which panels, switchgear, UPSs, ATSs, etc. need to be shut down for maintenance and repair?
    2. Which computing systems might be affected?
  2. Complete an in-depth arc flash analysis.
    1. A comprehensive short-circuit study should be conducted for the data center space.
    2. A breaker coordination study must be done, even if one has been done before.
      1. Ensures that the breaker nearest to the fault trips before any unintended current can pass through.
  3. Do not allow electricians to work on "live" systems.
    1. Power may have to be shut down for regular maintenance or observation, or even just to open the panel door.
  4. Ensure that all personnel in contact with equipment are qualified and prepared to assess and handle different equipment with varying degrees of arc flash hazard levels.
    1. Proper gear and tools should be used at all times and regulations should be followed accurately.

How Redundant Can You Be?

Redundancy. It’s a term we hear less as a suggestion and more as a necessity for the data center space. Redundant data centers are less likely to have unexpected downtime, single points of failure, detrimental mechanical issues, or even problematic user error. But just how redundant should you be? Is there a golden standard? Instead of observing this question as how redundant you should be, ask yourself how redundant you can be.

 

Take a step back: What exactly is redundancy?

Imagine you are grilling burgers for a barbecue. You’ve totaled up your RSVPs and determined that 10 people would be attending. This is great! You know how many burgers to buy, how many to toss on the grill, and how many people will eat a burger. If everything goes according to plan, you will be able to spend the exact amount of time, money, and energy on 10 burgers with no portions lost.

If everything goes according to plan, of course.

Two very simple issues can happen here:

  1. One of the burgers slips off the plate and plunges to the grass and dirt under the grill. You now only have 9 burgers but need to feed 10 people.
  2. Someone who initially thought they couldn’t make it unexpectedly shows up due to a last minute schedule change. You now only have 10 burgers, but need to feed 11 people.

In the simplest of definitions, redundancy is preparing ahead for these issues by having more burgers than you need. In data center terms, this is like having more UPS units or more cooling units than you technically need in case of unforeseen issues.

Redundancy comes at a price, however. The more units you have to cover these potential problems, the more money you spend to purchase them, the more energy you use to maintain them, and the higher your electrical output and costs become.

 

N, N+1, and 2N: What do they mean?

To some, defining these calculations might seem basic; however, it can be very easy to misunderstand what these numbers mean – sometimes we become so accustomed to a certain method that we forget to explore other options.

Let’s take a look at N. N represents the exact number of units that is sufficient to accomplish your goal, such as our 10 burgers for 10 people or the number of UPSs to cover the data center load.

Is N enough to sustain your data center? Technically speaking, yes. But without any redundancy, you are taking a huge risk and betting all your money on one horse. If anything happens with N, you are looking at downtime, extensive money loss, and incalculable stress.

N+1 was devised to protect against these issues – having one more unit than required in case of issues. If you had 11 burgers, you would easily be able to feed 10 people if one burger fell on the ground, or feed 11 people for the unexpected guest. Likewise, having one more UPS or cooling unit in place will protect your data center in case one of the other units goes down.

This isn’t just limited to +1 either. Some organizations use more units to cover more issues, such as N+2, +3, +4, etc. depending on the size and requirements of the room. The more burgers you have, the more able you will be to resolve issues of dropped patties or additional guests.

The “N-plus” factor is very useful if you feel extremely confident that you can predict these potential issues. But sometimes, problems occur outside of our control. What happens if you have 4 extra guests at your BBQ, but accidentally drop an entire plate of 5 burgers? Even with an N+4 system, you are still unable to feed 5 guests.


2N is the calculation that strives to cover all bases. Rather than adding individual units in case of issues, instead you double the required number on a completely separate power stream to ensure that you can bounce back from even a total loss.

If 10 people RSVP to your barbecue, this means you are grilling 20 burgers on 2 separate grills. Now you can drop a plate AND have unexpected guests and easily feed everyone. You can even accidentally knock over an entire grill’s worth of burgers while tossing the football around, and your other grill will run unaffected and pick up the slack. With a 2N set up, if N goes down entirely, the redundant N will remain unaffected.

In comparison, an N+1 system is usually running from the same power source and so if the entire system goes down, even the redundant parts might go with it.

And if you are extremely worried about downtime and have the funding and manpower to maintain it, you can go a step further: some organizations run a 2N+1 system – the ultimate in uptime stability. It goes without saying that this setup is a costly one; it is most effective in industries such as banking, power, and government, where uptime availability can be the difference between a thriving business and complete professional destruction. A quick sketch of how these levels translate into unit counts follows below.
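As a minimal sketch of how these redundancy levels translate into installed equipment, assume a hypothetical load and unit size; the numbers below are illustrative, not a sizing recommendation.

```python
import math

# How N, N+1, 2N, and 2N+1 translate into installed units for an assumed load.
load_kw = 450
unit_capacity_kw = 100

n = math.ceil(load_kw / unit_capacity_kw)   # N: just enough units to carry the load

configs = {
    "N":    n,           # exact coverage, no spare
    "N+1":  n + 1,       # one spare unit on the same system
    "2N":   2 * n,       # a full duplicate set on a separate power stream
    "2N+1": 2 * n + 1,   # duplicate set plus one spare
}

for name, units in configs.items():
    print(f"{name:>4}: {units} units ({units * unit_capacity_kw} kW installed)")
```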

 

Comparing and Contrasting Redundancy

N

Pros:
  • Covers your equipment exactly
  • No additional resources needed
  • Lowest cost, energy output, and maintenance

Cons:
  • No redundancy
  • Must have a very exact picture of data center use/loss at all times
  • High downtime risk
  • High retroactive costs in case of downtime

N+1

Pros:
  • Additional units support extra load
  • Protects in case of individual unit downtime
  • Return on investment is highest in both cost and maintenance

Cons:
  • Still susceptible to a single point of failure
  • Usually only covers individual unit downtime, not a full system outage

2N

Pros:
  • Full redundancy = highest protection against downtime
  • Runs on separate systems (concurrently maintainable), can protect against a major outage

Cons:
  • Costly and possibly energy inefficient
  • Most time, energy, and manpower to maintain
  • Return on investment is hard to see without a serious downtime threat

 

Why would someone choose an N-only system?

It’s difficult to generalize into one or two reasons why all N-only organizations have chosen to remain that way. Perhaps it is a cost issue; it is certainly the most inexpensive route in the short-term to go N-only. Sometimes that is coupled with the thought that said organization is not profit-dependent on 100% uptime and availability. A smaller shop with only a server closet, no online store, and one POS system might not see N+1 as a logical option.

 

In a broad sense, the more that uptime/availability influences your company profit and brand loyalty, the more viable an N+1 or 2N system will become to you. But there are so many factors that also affect this decision, and so it is important to discuss your options with a consultant who has extensive experience in the field like Facility IT Group. Let us know if we can help.

PUE is O-U-T

In case you were not aware, ASHRAE (the American Society of Heating, Refrigerating, and Air-Conditioning Engineers) just released its new ASHRAE 90.4 Energy Standard for Data Centers publication. It makes many changes to the standards data centers have been held to over the past ten years, including the removal of Power Usage Effectiveness (PUE) from the standard calculations.

ASHRAE has been a global leader in defining and explaining operating standards for data centers and other high-powered facilities for many decades. In 1959, the American Society of Heating and Air-Conditioning Engineers merged with the American Society of Refrigerating Engineers and ASHRAE was born, going on to design generations of data center standards that operators are generally expected to follow.

It should be noted that ASHRAE’s publications are not actually requirements but rather suggestions. However, it is regular practice for building officials to reference and adhere to ASHRAE standards when determining code evaluations and permitting, as well as inspections.

So what is different with this 90.4 publication? Let’s take a look at a few key changes.

 

Standards are no longer considered “prescriptive-based”, but instead more “performance-based”.

For the past ten years, it has been accepted that data centers were required to meet certain calculations of energy efficiency and wasted power. Because of this, equipment designers and data center providers have been dedicated to reducing waste and delivering the highest efficiencies available.

This progress meant it was no longer about trying to achieve the standard, but about trying to achieve perfection. And with some of the “greenest” equipment seen in our time, ASHRAE decided that it is not as important to standardize the equipment itself as it is to standardize the efficiency of the equipment.

For example, economizers have been all but required in order to meet the energy standards set in the 90.1 publication. Even when economizers were unreliable or impractical to use, many felt obligated to install them to remain within the standard. Economizers have been especially troublesome for data centers in warmer climates, where they do not deliver the same return as facilities in colder locations.

The new 90.4 standards allow data center operators to use other means of ensuring efficiency without mandating the equipment they use, so long as they meet the output calculation standard.

 

Upgrade projects now respect individual components rather than forcing unrealistic updates.

Under a single-calculation system, this change can be difficult to appreciate. ASHRAE realized that complying with 90.1 during a major upgrade often forced operators to upgrade other major components as well, even when doing so was not a realistic business decision. In order to comply with 90.1, data centers were stuck between a rock and a hard place.

The new 90.4 allows data centers to upgrade a component of the data center while allowing a tradeoff between mechanical and electrical infrastructure. For example: If you need to replace your UPS system, but doing so will mean that your cooling system will no longer fall under the cooling energy standards, you won’t be forced to upgrade your entire cooling system.

Instead you can purchase a highly efficient UPS and smaller components to help distribute power more efficiently (such as transformers). As long as the new electrical equipment works together with the existing mechanical cooling equipment to fall under the more general performance efficiency standards of 90.4, you will not be forced to go through the lengthy and unnecessary process of upgrading the cooling.

 

PUE has been removed as a standard calculation and is replaced with MLC and ELC calculations.

Power Usage Effectiveness standards have been wonderful for the data center community. They forced operators to think logically and intentionally about the energy efficiency of their equipment and the amount of power they waste daily. Had PUE not been introduced in 2006, the amount of power and money wasted on energy in today’s data centers would be astronomical.

However, in recent years, with how much data center equipment has changed and the constant need for upgrades, it has made less sense to use PUE as a standalone standard. PUE requires actual in-use operating data, which is very difficult to measure accurately. Issues with the power utility can cause PUE numbers to rise even if the equipment itself is not wasting much power.

Server utilization also plays an important role in how PUE reads. For example, a company with a high PUE and heavy server usage might actually be more efficient overall than one with a low PUE and low usage. PUE is also easily abused for marketing purposes, which makes it difficult to judge accurately. The basic calculation itself is simple, as the sketch below shows.
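For reference, the basic PUE calculation is simply total facility energy divided by IT equipment energy; the kWh figures in this sketch are illustrative assumptions.

```python
# PUE in its basic form: total facility energy / IT equipment energy.
it_energy_kwh = 4_000_000        # assumed annual energy delivered to IT equipment
facility_energy_kwh = 6_200_000  # assumed total annual energy entering the facility

pue = facility_energy_kwh / it_energy_kwh
print(f"PUE = {pue:.2f}")        # 1.55: every IT kWh costs 1.55 kWh at the utility meter
```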

To solve these problems, ASHRAE has removed PUE in favor of Mechanical Load Component (MLC) and Electrical Loss Component (ELC) calculations.

 

Mechanical Load Component (MLC): A formula for the efficiency factor that represents the amount of mechanical power needed for the design data center load. This calculation standard also varies by climate, making it easier to address the concerns of different physical locations.

Electrical Loss Component (ELC): A calculation of the losses experienced through different segments of the electrical system. Each segment is held to its own loss characteristics (such as a single-phase vs. a 3-phase UPS), adjusting for varied sizes and configurations.


 

Rather than trying to force these two calculations together, ASHRAE keeps them separate and holds each to its own qualification. In doing so, it prevents businesses from being obligated to make costly and unnecessary upgrades just to keep up with PUE standards. Facilities can still strive for the most energy-efficient designs possible, while being realistic about what to upgrade and what to retrofit.

This will be particularly helpful for legacy data centers, where keeping up with PUE requirements has been detrimental to business; elaborate upgrades and design alterations to account for new technology have forced extremely costly solutions to maintain efficiency levels. The separation of MLC and ELC will allow these data centers to make upgrades as necessary and convenient.

 

Tradeoffs between MLC and ELC allow for a more lenient 80/20 policy.

Perhaps the greatest change in 90.4 is that it allows tradeoffs between the MLC and ELC calculations. On their own, MLC and ELC are each held to a standard value that a data center must meet or come in under in order to qualify. In contrast to PUE, where operators pushed for a lower number close to 1.0 without any real sense of exactly how efficient their system was, MLC and ELC provide guidelines to more accurately gauge the efficiency of the data center.

Since each calculation is held to its own standard, the total efficiency target is simply the sum of the two numbers. ASHRAE determined that as long as a data center’s combined calculation is equal to or less than this sum, it adheres to the standards set forth in 90.4.

What does this mean for a tradeoff? Imagine that your MLC calculation is below its standard, but your ELC calculation is above its standard – and when you add the two together, the total equals the combined standard. Even though your ELC is above its individual standard, the total still qualifies, and therefore no changes need to be made to meet the 90.4 guidelines. The sketch below walks through that check.
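Here is a minimal sketch of that pass/fail logic as described above; the target and measured values are illustrative assumptions, not figures from the 90.4 standard.

```python
# Tradeoff check: each component has its own target, but compliance is judged on the sum.
mlc_target, elc_target = 0.30, 0.15    # assumed standard values
mlc_actual, elc_actual = 0.26, 0.18    # MLC beats its target, ELC misses its target

combined_target = mlc_target + elc_target
combined_actual = mlc_actual + elc_actual

compliant = combined_actual <= combined_target
print(f"Combined {combined_actual:.2f} vs. target {combined_target:.2f} -> "
      f"{'meets 90.4' if compliant else 'upgrades needed'}")
# 0.44 <= 0.45: the strong mechanical side offsets the weaker electrical side.
```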

Previously, the situation above would likely mean a data center operator was compelled to make changes to the electrical system, which might in turn require an unnecessary (if small) change to the mechanical system to ensure compatibility. Now the operator can rest easy for a while and make changes as they see fit.

These tradeoffs ensure that only the most out-of-date or inefficient systems will be forced to upgrade – seen as an 80/20 policy. 80% of current data centers will most likely fall under or equal to the standard because of the allowance of tradeoffs between MLC and ELC, and only about 20% will need to make upgrades to meet the criteria.

 

 

ASHRAE 90.4 has gone through several public revisions and produced many addenda, as these standardizing publications regularly do. Currently, The Green Grid (which devised PUE in 2006) is reviewing ASHRAE 90.4 and determining if and how to implement MLC and ELC. The removal of PUE will definitely not be an overnight change, considering how prevalent the calculation has been over the past decade.

But it is important to understand the difference between PUE and MLC/ELC, as we will likely be seeing much more of these calculations in the very near future.

 

You can purchase ASHRAE 90.4 here

Don't Be Afraid of the Cloud

One of the greatest factors preventing companies from moving any of their critical processes and data to the cloud is perhaps one of the simplest concepts: FEAR. And understandably so – it is good to be wary of anything in control of your critical data, especially something surrounded by so much scrutiny.


The cloud is continuously heralded as the future of IT with more and more big names jumping on board and ongoing creation of new associated programs and technology. But that still doesn’t answer the biggest fearful question: “How do I know my data is safe?”

Despite the constant scrutiny, cloud security is an incredibly capable asset.

 

Provides Many of the Same Functionalities as Traditional IT Security

  • Protects your information from theft
  • Prevents any data leakage
  • Shields against unwanted deletion
  • Servers are stored and protected within cloud data centers

 

Compliance Requirements in Place to Protect Critical Information

  • Cloud data centers are often held to stricter regulatory standards than traditional in-house IT, due to multi-tenant hosting and the need to pass regular inspections
  • Support for compliance regulations such as PCI DSS, FISMA, FedRAMP, HIPAA, etc.
  • Extra security programs go above and beyond to keep your information safe
  • With clients such as financial institutions, medical centers, and other enterprises with valuable personal information, customer confidentiality is an utmost priority
  • Since cloud data centers are focused completely on the performance and security of their infrastructure, they are better prepared against attacks than traditional in-house IT


 

Of course, people do not often enjoy losing “control” of their property. One criticism of the cloud is that giving up control of the physical equipment means higher vulnerability. But it’s important to remember this:

Control does not mean security

Anything that can be accessed from the outside, enterprise OR cloud, has equal chances of being attacked. Attacks are generally attempted via favorable circumstances – if there is an opening in the wall, someone will try to get through it. Therefore, the physical location of your data matters much less than the means of access.

  • Cloud providers go way above and beyond to secure client data, more than any client would typically do in-house
  • Cloud security is not a product or a service, it is a process that is ongoing and always evolving

Leo Reiter, CTO of Nimbix, spoke on the topic of network security: “Cloud providers live, eat, and breathe network security while most other organizations don’t usually list it as one of their core competencies.” It is important to understand just what lengths these providers go to in order to protect your data, and that an in-house data center does not automatically make you better protected than a cloud data center.

When it comes to the security of your mission critical data, it is more important than ever to educate your employees about what is expected in terms of confidentiality. Insider/employee attacks are the third biggest threat to your data’s security. Keeping your employees aware of the security measures in place and ensuring their compliance will drastically reduce the level of threat to your processes.

Enterprises like Netflix use cloud data centers despite having millions of customers’ personal data on file. As of January 2016, they had shut down their last data center and were running completely on the public cloud. You can read more about their transition here.


 

Evolve IP is one of those companies that lives and breathes cloud. They have partnered with top security companies such as Cisco, RSA, and Qualys to provide powerful security solutions. Some of their network security options include:

  • Security Monitoring and Management
    • 24x7 monitoring, event notification, management, and fault resolution
  • Security Event Management (SEM) Service
    • Extends firewall and IDS capabilities – analyzes data in real time and automates the process
  • Cloud Firewall
    • Network-based private firewall – eliminates traffic to premise, increases bandwidth
  • Cloud IDS
    • Network-based active intrusion detection
  • Managed Firewall
    • Premise-based firewall, supports onsite DMZ or management requirements, monitored and managed from cloud
  • Managed IDS
    • Premise-based active intrusion detection
  • Threat/Vulnerability Scan
    • Identify vulnerabilities, ensure regulatory compliance, prioritize remediation – no infrastructure to deploy or manage
  • PCI Compliance
    • Automated PCI scanning from cloud, backed by certified auditing and consulting services

And this is only the tip of the iceberg. There is a multitude of different security solutions and options available to you. All it takes to get started is some research and the willingness to look past the scrutiny to see what works for you.

It’s time to start looking into how bringing some of your processes to the cloud can benefit your business. We can help you take a deeper look into which provider to use and the protection you will have against attacks. Give us a call and let’s get started!

 

 

Evolve IP is The Cloud Services Company™. Designed from the beginning to provide organizations with a unified option for cloud services, Evolve IP enables decision-makers to migrate all or select IT technologies to its award-winning cloud platform. Evolve IP’s combination of security, stability, scalability and lower total cost of ownership is fundamentally superior to outdated legacy systems and other cloud offerings. Today, over 80,000 users across the globe depend daily on Evolve IP for cloud services like virtual servers, desktop services, disaster recovery, unified communications, contact centers and more.

Evolve IP’s Network Security solutions offer businesses a better way to protect their critical business data by delivering turnkey network protection from both internal and external threats without the expense or complication of on-site devices or software.

Sources:

https://cloudsecurityalliance.org/
http://www.cio.com/article/2922374/cloud-security/20-of-the-greatest-myths-of-cloud-security.html
http://www.techrepublic.com/article/cloud-security-10-things-you-need-to-know/
https://aws.amazon.com/security/introduction-to-cloud-security/
http://searchcloudcomputing.techtarget.com/opinion/Clouds-are-more-secure-than-traditional-IT-systems-and-heres-why