New ultrabook design decisions limit data recovery options
When my girlfriend recently dropped (and killed) her new ultrabook, I found out the Solid State Drive (SSD) inside it was not easily removable the way old hard drives always were. This means a dead laptop = completely lost data if you aren’t fully backed up. Now don’t get me wrong: storing my valuable data on precariously spinning platters inside laptop hard drives has always worried me. Even before I co-founded an online backup company, I’d wince when a friend dropped a laptop roughly on a desk or spun it around with a fast nudge so others could see the screen. So as soon as I could, I moved over to SSDs, which contain no moving parts. SSDs are more reliable than the old spinning platters overall, but current designs introduce catastrophic failure modes that will result in MORE data loss in many situations.
You may not be familiar with the name Crispin Porter + Bogusky, but you’re probably familiar with their work. The firm, which was named U.S. Agency of the Year by Adweek last year, created “The King” and “Whopper Freakout” campaigns for Burger King; the Windows Mojave, Jerry Seinfeld/Bill Gates and I’m a PC campaigns for Microsoft; as well as ads for Guitar Hero, Old Navy, Best Buy, Coke Zero, and others.
For the past four years they have also been the official U.S. agency for Volkswagen and have created a lot of media during that time. So, when it came time to archive all of that media somewhere…they decided to build their own Backblaze Storage Pod.
Ryan Banham, Windows Evangelist at Crispin Porter + Bogusky, took on the task:
Just as everyone is settling down for a big turkey dinner our first
Backblaze storage pod will be preparing to feast on terabytes of data.
He customized the Backblaze storage pod reference design with a different motherboard, more memory, Samsung instead of Seagate drives, and a single power supply, and used Windows Server 2008 as the operating system. It’s great to see people making the design suit their particular purpose. Once he homes in on the final design for their purpose, he plans to deploy several racks of mirrored archive servers to support their storage needs.
Some of the feedback Ryan provided to us on his customized version included:
* The pod is uber-cool: Even under full load the drives stay under 72°F, cool enough that he swapped our fans for quieter, lower-power intake fans.
* No trampolines for the pod: Moving the pod around requires the RAID cards to be reseated (possibly because the bottoms of the RAID cards stick out of the case).
* $20 gets you far: A pod running 50%–75% of the month costs just $20 in electricity.
Thanks for sharing the build and giving me something fun and
interesting to do over the last few weeks! I learned a lot.
Glad it was interesting and useful and thank you for sharing your learnings!
Photos Ryan sent us of his pod:
What do Google providing search, Coca-Cola operating its systems to track inventory, and Backblaze backing up your data have in common? The computers that handle all of this live in data centers. And those data centers use power – lots of it.
In the U.S. alone there are over 20,000 data centers – each of which houses thousands or tens of thousands of servers. Combined, these data centers make up 3% of all U.S. energy consumption (not just electricity) – more than the entire domestic air fleet.
So when I went to an event on Wednesday called:
THE TRUTH ABOUT THE FUTURE OF THE DATA CENTER:
CLOUD, COLOCATION, & DATA CENTER REAL ESTATE
it should be no surprise that the focus was on power, power, power.
And lest you think this is people getting wrapped up in the green movement or just jumping on a marketing trend, let me dissuade you. Data centers in the U.S. spend $23 billion a year on electricity, according to KC Mares of MegaWatt Consulting. In fact, electricity can often cost over 50% of the purchase price of a server over its lifetime. Minor improvements can have massive implications not only for global warming but also for company bottom lines.
KC provided a fascinating overview of innovations and experiments that operators of data centers and the companies building out large server deployments are pursuing. Some examples:
* VFDs – variable frequency drives that let blower fans adjust their speed to actual demand rather than spinning at a constant rate.
* Natural cooling – using outside air and fans rather than air-conditioning to keep data centers cool; it turns out most servers are perfectly happy running at temperatures much higher than what data centers attempt to keep them at.
* Shorter cooling regions – having air flow almost directly around a server in the process of cooling it rather than through the entire building; shorter distances mean less air friction and less energy spent moving it around.
* Eliminating UPS systems – getting rid of the backup power systems and assuming servers will go down…and having backup servers or data centers instead.
* Using 480 volts – higher voltage means lower amperage and thus less heat loss and higher efficiency. More of today’s server systems are capable of handling this voltage.
* Higher efficiency power supplies – switching to 90% efficient power supplies on servers rather than using 70% or 80% ones; these are more expensive upfront but can still pay off fairly quickly.
A number of these items pay for themselves in a couple of months and then generate ongoing savings. KC has a variety of information on his site and blog.
Don Honabach has the honor of being the first person to successfully build his own Backblaze storage pod. (At least the first we know about.)
With four servers running at home for media storage, Don was using a fair bit of power (and probably generating a lot of heat and noise and taking up space). For five years he had been working to come up with an “Extreme Media Server,” and after reading about the Backblaze storage pod, he decided this might be the way to go.
Having expertise in the space, Don customized a variety of items in the pod including:
* The operating system (switching to Microsoft Windows Server 2008 R2)
* Power supplies
* and more…
In just a couple weeks Don had completed his “Extreme Media Server”. Combining all four servers into one, Don is saving 500 watts of power, and can run 16 independent movie streams across two monitors from a single storage pod.
Don created a blog that describes his experiences building his Extreme Media Server.
Congratulations Don and good luck watching all those movies at the same time!
Last month’s blog post about building our Backblaze storage pods generated a ton of interest, and many people are building their own pods! Our post also generated a ton of questions, so below we answer the common ones and provide more detail about where to get components.
Three weeks ago we published how to build a Backblaze Storage Pod, the cloud storage hardware we use for our unlimited online backup service, and gave away the design to anyone who wished to build their own. We thought a few people might find it interesting. Perhaps some might even want to try to build one. We never expected what would happen next.
Om Malik wrote about it at GigaOm, as did Robin Harris at StorageMojo, and Cory Doctorow on Boing Boing. Soon after, CrunchGear, VentureBeat, ZDNet, Mashable, TUAW, Electronista, MacWorld, Vator.tv, NetworkComputing, On-Storage, PSFK, Enterprise Storage Forum, eWeek and dozens of others picked it up. After digging in, SmallNetBuilder did a thorough breakdown for its DIY audience.
At Backblaze, we provide unlimited storage to our customers for only $5 per month, so we had to figure out how to store hundreds of petabytes of customer data in a reliable, scalable way—and keep our costs low. After looking at several overpriced commercial solutions, we decided to build our own custom Backblaze Storage Pods: 67 terabyte 4U servers for $7,867.
What we actually provide for our customers is online backup for home and online backup for business. However, in this post, we’ll share how to make one of these storage pods, and you’re welcome to use this design. Our hope is that by sharing, others can benefit and, ultimately, refine this concept and send improvements back to us. Evolving and lowering costs is critical to our continuing success at Backblaze.
Below is a video that shows a 3-D model of the Backblaze Storage Pod. Continue reading to learn the exact details of the design.
I’ve had a lot of success in my 20-year software engineering career developing cross-platform ‘C’ and ‘C++’ code. At Backblaze, we just released the Mac beta version of our online backup service, so I thought it an apt time to discuss my 10 rules for writing cross-platform code. We develop an online backup product where a small desktop component (running on either Windows or Macintosh) encrypts and then transmits users’ files across the internet to our data centers (running Linux). We use the same ‘C’ and ‘C++’ libraries on Windows, Mac, and Linux interchangeably. I estimate that supporting all three platforms slows down software development by about 5 percent overall. However, I run into other developers or software managers who mistakenly think cross-platform code is difficult, or might double or triple development schedules. This misconception is based on their bad experiences with badly run porting efforts. So this article quickly outlines the 10 simple rules I live by to achieve efficient cross-platform code development.