After years of decline, data centers are back

Finally. The press has caught on to our (pardon the pun) current reality. *

They are, as usual, wrong on the details. The industry has never been in “decline”, though it is certainly in good shape at the moment.

Power is the name of the game, and we actually have it. InterNap, at least in Seattle, does not. We are at just 10% of our available power capacity, and we have the floor space to handle a fairly dense install. InterNap, in contrast, limits their customers at Fisher Plaza to half racks by limiting their power.

Back in the bad days of 2001-2003, we saw a small burst of growth as colo facilities in the Seattle market started closing. Exodus, Colo.com, Verio, Level 3, Qwest, Zama, etc. were shutting down unprofitable datacenters. Some just vanished; others relocated their customers to California or Colorado. We were able to pick up a lot of new colocation business at the time, ironically because we were a small, local company with almost a decade of history… unlike the large, non-local companies, which were brand new and burning piles of VC cash. Suddenly the desire was for businesses doing new-economy work in an old-fashioned way: with revenue from customers, not investment from venture capital funds.

The one company that consistently won the larger business (multi-rack installs) away from us was InterNap. Prior to the new-economy meltdown of 2001, digital.forest was a value-priced colocation facility. We were priced at a sustainable level, but still much lower than Exodus or similar facilities. We had to be, as we were small, and our facility was, at that time, very second-tier. InterNap, in contrast, was in a brand-new, state-of-the-art facility in Seattle’s Fisher Plaza. What compounded the problem for us was that InterNap was losing money hand over fist, but was buoyed by cash from a pre-crash IPO. They had a huge facility to fill, and were practically giving it away. Their prices were unsustainable… insanely low. A full 50% below our pre-crash prices, sometimes even lower. We didn’t have a mountain of cash to burn, nor did we have thousands of square feet of empty datacenter, so we couldn’t match those prices and lost virtually every one of the larger bids to InterNap.

A few years later, we moved into one of those “state-of-the-art” colocation facilities left behind by a failing “dotcom”, and suddenly we found ourselves in a facility equal to, if not better than, InterNap’s at Fisher Plaza. I’d argue the latter, as we have a landlord (Sabey) that truly understands our business model (they built most of these facilities for the now-mostly-gone colo providers) and agreed to let us manage our own facility infrastructure. We maintain the HVAC and backup power systems directly, rather than second-hand via the landlord, as at Fisher. As the datacenter economic landscape has improved over the last few years**, prices have once again started to rise and are just now reaching sustainable levels. But just as InterNap goes back to the clients they practically gave space away to back in the day and tells them about higher prices, they also cannot provide those clients with any more power. I said it back in 2002, and I’ll say it again now: “I encourage my competitors to operate this way.”

Today’s savvy colocation customer expects to pay market rate for rack- or floor-space, but they also expect to have custom power solutions delivered to their racks. The industry standard of two 20 amp circuits per rack went out with the Pentium III. Until the computer hardware and chip industry can get their act together and get power consumption under control, today’s racks require 60 amps, or MORE.


Above: near maximum density before going to blade servers. A digital.forest client’s installation of 1U & 2U multi-CPU servers, plus 3U disk arrays for storage. This client has been growing at a “rack per quarter” rate for the past year. They’ll be moving into a cage in our new expanded space around the holidays.
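
For the curious, here is what those circuit sizes actually work out to in usable power. A quick back-of-the-envelope sketch; the 80% continuous-load derating is standard electrical practice, and the circuit counts are just illustrations, not a spec for any particular install:

```python
# Back-of-the-envelope rack power math. The 80% continuous-load derating
# is standard North American electrical practice; the circuit counts are
# illustrative, not a spec for any particular install.

VOLTS = 120
DERATE = 0.8  # a breaker should only carry 80% of its rating continuously

def usable_watts(circuits: int, amps_each: int) -> float:
    """Continuous power available to a rack fed by `circuits` circuits."""
    return circuits * amps_each * VOLTS * DERATE

old_standard = usable_watts(2, 20)  # the old two-20A-circuit standard
modern_rack = usable_watts(3, 20)   # 60A total, delivered however you like

print(f"Two 20A circuits: {old_standard / 1000:.2f} kW usable")  # 3.84 kW
print(f"60A rack:         {modern_rack / 1000:.2f} kW usable")   # 5.76 kW
```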

We are finally in a position to turn the tables on InterNap, as we have space, and more importantly, power & cooling capacity to spare, right as the market heat is approaching “boil”. For once, we are sitting on the right side of the supply/demand curve.

* I’m linking to the story in the PI, mostly because it is a local paper about a local company (well, USED to be local) in our industry. I first read it online in the New York Times, but they have that massively annoying “free registration system” and I don’t know how many of my readers are savvy enough to get through that with “bugmenot”… thankfully the Seattle PI picked it up.

** I’d say that the datacenter economy never really went down. If anything, its growth has been rock-steady since our beginnings in 1994. The only odd year was 1999, when the business doubled, but every other year has seen roughly 50% growth, even the “bad” years of 2001-2002. What happened was that between 1998 and 2001 the industry overbuilt capacity. Everybody was investing in datacenters, and as a result a classic over-supply, under-demand situation arose that artificially depressed the datacenter industry.

I feel the earth move, under my… um… butt.

I was sitting in the living room, enjoying a wee dram of Glenmorangie, when I felt the whole house wiggle. I iChatted a friend who lives in Olympia, about 100 miles from here, and asked if he felt it. He said no, so I knew it could not have been a very widespread quake.

The image above is from the nearest seismograph to my house in the UW network. It is located in Trafton, just off Jim Creek Road, which is, as the crow flies, maybe a kilometer from my home.

I have no idea where exactly on the Richter scale it measured, but my guess would be in the “low fours”.

GOES Satellite image

So, it isn’t a car picture, I know… but I like it.

This is an image captured via the GOES widget yesterday morning. I love several things about this image… the coastal fog, and how it traces the valleys along Grays Harbor and the Columbia River so well. There is also morning fog through the Snohomish River valley. Most impressive, though, is the massive bulk of Mt. Rainier, rising up and dominating its corner of Pierce County, the rising sun illuminating the glaciated eastern slopes and casting a dark shadow to the west.

If you look closely you can make out the forms of other mountains: Olympus, Baker, Shuksan, Glacier Peak, Mt. Adams, and even Hood and Jefferson in Oregon, but none stand out like Rainier.

I spent yesterday travelling from our house to Roche Harbor on San Juan Island (and back) to attend the wedding of a friend and colleague at digital.forest. Dave & Tanya Anderson married each other on the day that dawned in the image above… in perhaps one of the most beautiful places in the world.

We drove out to Anacortes and boarded the 11 am ferry to Friday Harbor, where we enjoyed a lunch and made a brief stop at the “English Camp” from the “Pig War” that established the final boundary between the US & Canada. (We’ve been to the American Camp before, but had never yet visited the English one.) Then it was on to the wedding ceremony and reception in the garden of the Hotel de Haro at Roche Harbor. It was a truly wonderful day. We returned via the 10 PM ferry, which stops at every ferry-serviced island in the San Juans, allowing us a nice two-hour car-deck nap, interrupted only by the occasional docking and an idiot in an Audi who set his frigging car alarm when he wandered off to the passenger deck. (Thankfully the WSF tracks these idiots down and delivers public embarrassment.)

Apple Announces Intel Xserve

MacSlash | Apple Announces Intel Xserve

OK, so I’ve never really developed this site into a “technology pundit’s page” like so many of my friends have (see blogroll), so I’ll point you to some comments I made on MacSlash about the new Xserves from Apple.

I REALLY wish that server makers would get out of this “must be ONE RACK UNIT” rut they are in. To achieve this supposed holy grail of server size they are getting completely absurd in the one dimension nobody talks about… namely depth. To Apple’s credit, they’ve offered a center-mount option on the Xserve since day one, but it is still way too long. The original is 28″ deep, and this new Intel-CPU’ed Xserve iteration adds another 2″ to that, making it 30″ deep.

I’m sorry folks, that’s beyond absurd. It is ludicrous.

I’ve always maintained that Dell does it to sell their own proprietary cabinets. Apple has no such excuse. I wonder where they’ve added the depth in relation to the center-mount area? At the back? At the front? 1″ in both directions? It should make adding a Xeon Xserve to an already populated rack or cabinet of Xserves a challenge!

We use awesome Seismic Zone Four rated cabinets from B-Line, which have adjustable mounting rails, but once they are set, you really don’t want to move them. If you put a server that is 28″ or longer into them, cable management starts getting tough and ends up presenting a real impediment to air flow. With the Dell gear we have to just remove the doors to make it work, which, when you think about it, pretty much negates the whole reason for putting a server in a cabinet! The majority of our Xserves are mounted in “open” Chatsworth racks, those excellent, bullet-proof workhorses of the high-tech world. This removes all the airflow issues, but row density suffers because you have to accommodate the Xserve, the cables, the people space front and back, PLUS the space to fully slide the Xserve chassis open without interfering with the row of servers in front of it. I realize what I’m about to say is counter-intuitive, but here is some reality for you:

1U servers such as the Apple Xserve actually lower your possible density of installation.

I’ll repeat…

1U servers such as the Apple Xserve actually lower your possible density of installation.

I could have a far more efficient datacenter layout with 2U servers if their form factor were 2U x 18″ x 18″. This would allow me to space my ROWS of racks closer together and, more importantly, maximize my electrical power per square foot far more efficiently than with 1U boxes. If you do the math on Apple’s new Xeon Xserve, the theoretical maximum electrical draw of a rack full of them is 336 Amps @ 120 Volts. Of course servers rarely run at their maximums, but that is a terrifying number. The “standard” amount of power per rack in the business these days is 20-60 Amps. Given that it is in reality IMPOSSIBLE to fully populate a rack with 1U/2PSU boxes, due to the cable-management nightmare of power cords and the heat load of putting so much power in so small a space, why bother building 1U boxes? Why add insult to injury by making them as long as an aircraft carrier deck too?
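
For the curious, here is a quick sketch of where that number comes from (assuming a standard 42U rack and a theoretical maximum of 8 amps per server; those are the working assumptions behind the math, not figures off Apple’s spec sheet):

```python
# Reconstructing the scary number above. The 8A-per-server maximum is
# inferred from the 336A total across a standard 42U rack; it is an
# assumption, not a figure from Apple's documentation.

RACK_UNITS = 42      # a standard full-height rack
SERVER_HEIGHT_U = 1  # the Xserve is a 1U box
AMPS_PER_SERVER = 8  # inferred theoretical maximum draw per server
VOLTS = 120

servers = RACK_UNITS // SERVER_HEIGHT_U
total_amps = servers * AMPS_PER_SERVER
total_kw = total_amps * VOLTS / 1000

print(f"{servers} servers x {AMPS_PER_SERVER}A = {total_amps}A @ {VOLTS}V "
      f"({total_kw:.1f} kW) in one rack")
# 42 servers x 8A = 336A @ 120V (40.3 kW) in one rack
```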

THIS is the ideal size for a server: 2U in height, and roughly 18″ square in the other two dimensions. It makes for perfect rack density, row density, and the most efficient use of power (and of course cooling) per square foot of datacenter space. Airflow becomes manageable. Cable management gets much easier. Storage options become more flexible. Heat issues are minimized. Etc. Do any of the server makers ever visit datacenters? Or do they just assume that 1U is what people want? Do they just listen to trade rags (written by people who sell advertising, not run datacenters!) or do they actually get out in the field and talk to facility operators?

I wonder.
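
To put some rough numbers behind that counter-intuitive claim: power, not rack height, is the real limit on how many servers you can run, so the extra depth of a 1U box (plus the aisle space needed to service it) just burns floor area. Here is a sketch of the floor-plan math; every dimension, clearance, and draw figure in it is an illustrative assumption, not a measurement of our actual facility:

```python
# Why 1U boxes can LOWER density: the rack's power feed, not its height,
# caps how many servers you can run, so extra chassis depth (and the
# aisle space to service it) is pure overhead. All dimensions and draws
# below are illustrative assumptions.

POWER_BUDGET_AMPS = 60  # what the facility can realistically feed one rack
RACK_U = 42
RACK_WIDTH_IN = 24

def powered_servers_per_sq_ft(server_u, depth_in, aisle_in, amps_per_server):
    physical_fit = RACK_U // server_u
    power_fit = POWER_BUDGET_AMPS // amps_per_server
    servers = min(physical_fit, power_fit)  # power is the binding limit
    footprint_sq_ft = RACK_WIDTH_IN * (depth_in + aisle_in) / 144
    return servers / footprint_sq_ft

# 1U x 30" deep: aisles must also leave room to slide the chassis fully open
deep_1u = powered_servers_per_sq_ft(1, depth_in=30, aisle_in=84, amps_per_server=8)

# 2U x 18" deep: same guts, shallower box, tighter rows
ideal_2u = powered_servers_per_sq_ft(2, depth_in=18, aisle_in=66, amps_per_server=8)

print(f"1U x 30in: {deep_1u:.2f} powered servers per sq ft")   # ~0.37
print(f"2U x 18in: {ideal_2u:.2f} powered servers per sq ft")  # ~0.50
```

Both racks bottom out at the same handful of powered servers, so the shallower 2U box wins purely on row pitch.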

My other beef with the Xserve has been Apple’s complete “slave to fashion” reluctance to put USEFUL ports on the FRONT of the unit. They REALLY need to put the USB and video ports on the front of the Xserve, NOT the back. Why force somebody who has to work at the console (and trust me, OS X Server isn’t mature and stable enough to run headless forever…) to work in the HOT AISLE? The backside of a stack of servers is HOT, and a very uncomfortable place to work. If you put the ports on the front, where the power button and optical drive are already located, there will never be a need to walk all the way around the row of racks and try to remember which server was the one you were working on. Apple actually did a hardware hack (with buttons on one side flashing lights on the other) to fix this design flaw. In reality the only time you really SHOULD be looking at the back of one of these servers is when you are installing it. After that, all admin functions should be performed from the front side of the server.

Again, it makes you wonder: did Apple actually spend any time in a datacenter or consider any functionality in their design, or was it just meant to look good in a glossy brochure or on a trade show floor?

Technology moves forward, right?

Ten years ago, I spent a year basically “commuting” between Seattle and London. I shed laptop and desktop computers and made an Apple Newton 2000 my primary computer for that year. I did all my email, systems administration, etc. from that little paperback-book-sized unit. While in Europe I used a GSM card and my Nokia phone for connectivity (via a RAS server in our London office), and in the US I used a Ricochet Wireless Modem from Metricom (may they Rest In Peace).

Nowadays I carry around a 15″ G4 PowerBook and a PalmOne Treo 600 (yes, I’m always a bit behind the bleeding edge these days!) and I find that in many ways I miss that Newton. I found it funny to see this online today:

Two gadgets, Ten years between them, One fight to the death!

Amusingly enough, the Newton wins.

For all the crap that Apple got for the early versions of the Newt, they did get their feces amalgamated pretty damn fast, and by the time the 130 shipped with NewtonOS 2.0, the damn thing was rock-solid. But, like so much other first-to-market technology, the early stumbles never allowed it to catch on as well as it should have. Only those of us who fully took the plunge ever really understood how good a miniature computing device could be. Now we’re just awash in mediocrity and end up toting a bag full of crap around.

–chuck

“Mr. President, I think you may be missing the point…”

My son, who is angling for permission to purchase this particular game, showed me this link. It is… hilarious.

The voice actors do an awesome job of recreating the cadence, inflections, demeanor, and mispronunciations (not to mention the malapropisms!) of their subjects. I love how the “Bush” character shifts so well from the “scripted” to the “unscripted” Bush, and how the Tony Blair one politely and indirectly (but with that wonderful restrained English disdain) corrects Bush’s abuse of the native tongue.

It is dead-on accurate. Well done.

Having played host to “viral marketing” websites prior to the whole YouTube/Google Video phenomenon, I’m a bit familiar with the genre. Unfortunately these things so often fly under the radar of mainstream media. Too bad the publisher can’t afford airtime on a major network, because I’d love to witness the furor this would generate if aired during “Idol”, or even better, Fox News.

But then I’ve always been one of those smirking $#!+ stirrers. 😉

–chuck

site rank

Just so you know, this isn’t exactly a Technorati-realm blog… according to Netcraft, this site ranks as the 8,717,350th most popular website on the planet!

That is going by the name chuck.goolsbee.org; it ranks 8,789,401st under its other name, blog.goolsbee.org.

I stumbled upon this stat while doing some work investigating bandwidth usage by some of our clients. We have some folks who pull in a lot of traffic for reasons that are not readily apparent. Usually, though, the reasons are really obvious, such as:

Adam Engst’s TidBITs at 69,044
Glenn F’s isbn.nu site weighing in at 99,231
The MacSlash boys at 10,542 (a shockingly high rank, way to go guys)
Shawn King’s Your Mac Life at 67,059
John Rizzo’s MacWindows resource site at 24,534
The Steves, who have several sites high in the rankings such as BidNip at 23,828, and cheatcodes.com at 56,623.
Perennial d.f favorites Car*Toys at 74,450
And at the top of the heap, Neoseeker at 4,045

The one that caught me by surprise?

bbs.trailersailor.com at 21,007