Puffy White Clouds

Since I’m snowbound I’m working on my latest bit of professional writing. This one is about the latest over-the-top buzzword in my business: “Cloud Computing”. This is a work in progress, so feel free to comment. Hit “reload” every once in a while… I’m hacking it up and reordering as we speak! 😉

Here is a soundtrack to have going as you read this (thanks Nick!)

Orb – Little Fluffy Clouds

[Andy Rooney] So what is all this buzz about “cloud computing” anyway? I really do not understand it. [/Andy Rooney]

From what you read and hear in the buzz surrounding cloud computing, it sounds like a model for how to do things that will just steamroller over the whole industry and make everything we’ve built over the past two decades obsolete. It will allow things to scale without effort, at minimal cost! It is an on-demand datacenter with ZERO capital outlay! It slices, dices, and juliennes! But even in the best case it seems like it can only really solve a small subset of the industry’s needs. In the worst case it will be a punch line for lame jokes a few years from now, much like other over-hyped buzzwords from the past.

To be honest, I had not really thought much about cloud computing until I was asked directly about it. So I sat down, looked at everything that was running inside the facilities I manage, pulled out Occam’s Razor and started slicing. The first cut was on myself, or at least on my perspective. As a user, what would I want to put “out in a cloud”? What sub-set of my data could safely run on top of a completely unknown and amorphous infrastructure? As a provider, how could I make the cloud model work? How could I build the hard assets required to run a “cloud” and survive in the marketplace? At one level, I totally get the concept. It is sexy as hell. Total software abstraction from the hardware layer. Stuff running everywhere and anywhere. In reality though, I can’t see how it can come to fruition in the traditional commercial model of setting up as a service provider and charging users for it. Like a centerfold model in the flesh, without benefit of an army of stylists before the shoot and a heavy dose of Photoshop afterwards, the sexiness wears off fast. Cloud computing has a lot of unrealistic hopes and desires obscuring plenty of flaws, blemishes, and unresolved issues.

As a user, I could not immediately think of any process running that I would want to throw out onto a “cloud”, so I started with the stuff I knew I could never let go of. Mind you, not that I wouldn’t want to let go of it, just that there was always some aspect of it that kept it from leaving the building.

First on the list is something that is fresh on my mind: payment-card and e-commerce systems. We just helped a client survive a rather intense PCI-DSS audit. The auditors have a very clear idea of exactly what they want to see in terms of server infrastructure, software configuration, and network deployment. Deviations from the script are hard to get away with. Paramount to everything is the ability to audit: to see where, when, and how payment card data is used. When they ask “Where is X?” you have to point to a specific spot (be it a server, a file system, or a database table) and say “X is right there.” You also have to be able to prove that X has not been altered without record of it, nor has ever left the building in an insecure or unencrypted state. So can any of this be trusted to a cloud? I doubt it. A cloud is amorphous and indistinct. It is layer 7 abstracted from all the lower layers. You can’t audit a cloud. It is virtual. Sure, we all know that it translates to a physical manifestation at some point, but can you touch it? Can you audit, with absolute certainty, its filesystems, logs, and physical access? Can you be absolutely certain that it is physically secure? Can you be absolutely certain that its virtualized filesystems are not mingled on a physical disk with somebody else’s data? ABSOLUTE CERTAINTY is required for compliance. You can’t find absolute certainty out there in a cloud, by definition.

What goes for PCI also goes for all those other Fully-Acronym-Compliant compliance regulations out there: HIPAA, SOX, SAS70, GLBA, etc. No matter what industry you operate in, there are regulations somewhere that you either have to comply with now, or will have to in the near future. Further, it is difficult to fully detach the systems that require compliance from the other corporate systems that interact with them.

Additionally, as so many IT managers have learned through hard lessons, data retention for legal purposes is also vital these days. At an ISP I dealt routinely with data retention requests from various law enforcement agencies as well as State and Federal courts. In corporate environments, issues of civil and contractual liability also play into data retention. This has traditionally been the realm of email, but can theoretically extend to any and all corporate communications, documentation, applications, and data. Frequently this transforms into third parties wanting physical access to the data and, just as importantly, audit trails of who has access to the data and systems. Here again Cloud Computing isn’t going to fly, because it lacks the absolute certainty that auditors and legal systems require.

So if you have to have audit-safe data, cloud computing is out. If you have to live by any retention rules, which cover more and more data types each year, the cloud gets ruled out. So is cloud computing just a solution in search of a problem? If it cannot really contain core corporate data, what is it good for? Well… edge cases.

If you Google the term “cloud computing success stories” you get lots of press releases from cloud computing providers and startups, but very few actual success stories. Those that are there are all edge cases. Situations where prototype applications undergo rapid scaling, such as a Facebook plug-in or video content. Cloud deployment allows a startup with limited capital to ride somebody else’s infrastructure to scale quickly, but what happens when they need to, in that term that Biz Dev types love so much, “Monetize” it? Once you start down that path you become entangled in regulatory and compliance realms. That startup is going to HAVE to deploy some of their own infrastructure to support that, and revert to some hybrid-mode usage of cloud computing. The cloud cannot contain anything “critical”, only things that overwhelm your ability to scale them. Even then, that deployment may only be temporary, until you can build up your own infrastructure. A start-up could use the cloud as a crutch until it could stand on its own, so to speak.

So in the end, the cloud is a place to put things of little importance. Items of a temporary nature. Much of the Internet can be described as items of little importance, so perhaps there is something to the Cloud concept. The hard part then becomes making it pay. So then, from the cloud provider’s perspective, how can you build a successful business on temporary items & users? Every successful Internet business has been built on the concept of recurring revenue. Being hit-and-run by a series of resource-hogging customers doesn’t sound like a sound business strategy to me.

The old adage is true… There Is No Free Lunch.
Those of us who have built and maintained datacenters on the scale required to truly handle anything thrown at them know that doing so is NOT cheap. The bill has to be paid at some point. Wildly popular web apps with no revenue won’t pay the cost of the servers, much less the electricity bill. I can’t see how the cloud providers can spend the cash to build out the infrastructure and then have enough margin in the usage charges to enjoy healthy profits. They will have to keep their usage percentages high to stay ahead of the capital expenditure curve. Just like all the previous iterations of shared computing resources in the past though, as actual usage goes up, performance goes down. So if they are successful in keeping usage high, they’ll have to keep spending more capital to expand and upgrade their infrastructure. This sounds like Sisyphus on roller skates.

I always like to boil down complex concepts to overly simple descriptions. They help clarify so much fuzzy thought. For example I have always said that the definition of a datacenter is “A place where electricity gets transformed into bits, on a very large scale.” Think about it: power goes in, bits come out. The by-product of that large-scale process is heat, which plays into the definition a tad, but otherwise that is a datacenter in a nutshell. So let’s boil Cloud Computing down to its most basic definition: Cloud Computing is Datacenter-on-demand.

Datacenters, as we know, are capital-intensive places. They are expensive to build, and expensive to run. It is very hard to deliver something so large and unwieldy in an instant to meet sudden demand, even using modular techniques. Demand fluctuates, and unless you are going to charge usurious rates when demand comes in, you will be burning cash at terrifying rates when demand is down. The fire will continue to burn even when demand is moderate. When demand suddenly scales upward, it is unlikely you can meet it, unless you have phenomenal amounts of unused capacity lying around burning capital. You cannot have truly scalable, redundant, reliable datacenter infrastructure at low cost. The capital, and the return on that capital, have to come from somewhere. The lifetime of a datacenter facility averages between 5 and 15 years. The lifetime of a server is even less: 18 to 36 months. No Cloud Provider wants to be a break-even prospect, much less a money-losing one. So how will any of them survive unless they charge their users far more than it costs to build and run their facilities? See the bowl-swirling process trap here awaiting the potential Cloud Computing provider?

Another thing to consider: So when the provider goes tango-uniform what happens to all your data out there in the clouds? It evaporates. Good thing it wasn’t anything critical eh?

The only real successful “Cloud Provider” today is Amazon, with their AWS services, and their current stance actually backs up my viewpoint. If you read their User Agreement “carefully” as they request that you do prior to signing up, it lays out a service that really should not be used for anything critical or sensitive. It is clear that their model is selling unused capacity on their own systems, and while they’ll be as nice as they can while you are a (paying) guest there, their needs come first. With anywhere from 60 down to 5 days’ notice they can terminate the bargain, with cause or without. They also state that neither security nor uptime is guaranteed, that they can suspend the service pretty much at any time they wish, and that they have no liability to their customers whatsoever in that event. This works fine for low-usage stuff, non-critical software infrastructure, and meaningless items of temporary interest… but it will not fly for mission-critical corporate IT functions.

Finally, one thing I think happens often in the business is Buzzword Overlap. People throw the Buzzword du Jour at whatever concept they are trying to sell. The overlap I see a lot in the Cloud space right now is “Software as a Service”, aka “SaaS”. SaaS can use a cloud as its underlying infrastructure, but SaaS is NOT a “cloud.” So before you start firing up a flaming rebuttal to my thoughts, get out your own mental knife and cut away the SaaS components from your Cloud ones. I feel that SaaS and other online applications have a strong future. I look at the stuff running in the facilities I manage and good portions of it are SaaS delivery of some sort. The whole mobile market and most web applications are SaaS of some sort or another. The SaaS market is in its toddlerhood, having evolved from the previous buzzword “Application Service Provider” … same idea, different name. Google for example is not a cloud provider per se; they are an application (search, video, mail, chat, etc.) provider who happens to use cloud technologies to support their applications. You don’t buy compute or datacenter capacity directly from Google, you buy application time online. SaaS has a future.

So what does the future hold for Cloud Computing? I think that as an underlying technology it makes a lot of sense. Anyone developing software should do it with the assumption that it will run across many machines and many locations. As a business model though? If I were a Venture Capitalist I’d be chasing people out of my office as soon as they used the phrase. I foresee a lot of “Cloud Computing” startups evaporating like their namesake.

December Sunrise at digital.forest

The past two days have been a bit surreal. Seattle got socked with a big snow, not long after our big snow up in the foothills. The boys arrived safely in Colorado for their holiday visit to their Grandparents… but I got stuck at the office Thursday night as snow piled up all around us. The roads were insane, which I could plainly see outside my office window. A small subset of the staff made it to the office and it was a light-hearted fun day and night. I awoke before dawn this morning and, seeing that it was clearing, ran outside and set up my time-lapse gear to grab the above footage. I decided after the sun rose to add a twist to the movie by “sliding” down the hill, making a two-layer set of movement in the video. My camera mount did not allow for smooth movement so it is not as good as it should be, but I’ll get that sorted out.

Later I had to post on our support blog about our staffing situation and figured I’d throw the video on there for good measure.

Sorry… been a bad few days.

Started mid-week… Got some bad news (I can’t really talk about.)

Then it started snowing. Never a good thing here in the Pacific Northwest.

Then my laptop died. The old G4 I’ve been driving for over 4 years. I installed the 10.5.6 update and it just rolled over and died. Kernel panic, SPOD, you name it. I tried restoring from backup. I tried re-installing. I even “nuked & paved”… SPOD & Kernel Panic. Sigh. It frustrated me so much that after the last failed re-install I applied some “percussive maintenance” and then tossed it across the room. As you can imagine, it did not take that very well.

I awoke this morning at 4, to over a foot of new snow. The Jetta spent the night in front of the house, uncovered, as Sue’s CRD and the Jag are sleeping in the garage these days. The boys had a 12:40 flight to catch at Sea-Tac, 70 miles away. I started shovelling out the VW from the drift. It started right up despite the cold. I let it warm up while I got the Liberty CRD out and used it to clear a path down the 1/3rd of a mile down to the road for the Jetta. A couple of laps up and down the drive cleared the snow sufficiently. I packed a shovel, some gloves, and boots, along with the boys’ luggage, into the car. With the boys awakened and fed, we headed off towards the airport around 8. Once down off the hill and onto the freeway the roads were in much better shape. Still snow covered, but plowed, sanded, and well packed. We were able to move along at a good rate. The snow started again in earnest when we reached downtown Seattle. We arrived in a blizzard, with 2 hours to spare, got them checked in, and then I headed off to the Apple Store at Southcenter to pick up a new laptop.

The parking lot was empty, so I had a little fun doing handbrake turns and generally hooning about for a bit to improve my mood.

Grabbed a MacBook Pro. Most people would be thrilled about this. I was just grumpy.

Drove to my office through and around closed freeways, accidents, jack-knifed semis, and idiots.

Spent the rest of my day configuring a new machine and transferring my data to it. I have plenty of backups, and was able to restore an image of my previous laptop to an external hard drive yesterday. Moving to the new machine SHOULD have been easy with Migration Assistant, right? Forget it. It failed twice.

Had to move everything over by hand. Still dealing with the fallout of that. iPhoto is the issue I’m dealing with at the moment. Sigh.

BastionHost Buys Nova Scotia Data Bunker « Data Center Knowledge

Future Home of a Colocation Facility?


I always do a “rollseyes” when I see these “Datacenter in a Cold War Bunker” stories. One, because they are just silly when they tout the “can survive a nuclear strike” capabilities… look, if ICBMs are falling out of the sky, we’ve got much bigger problems than website uptime!

But wait... I need my email!!!

Second, the facilities in question were designed to house PEOPLE, not datacenters. The power & cooling infrastructure is designed to support something like 90 Watts per square foot at MOST. Datacenters these days want 500 Watts per square foot minimum. Additionally, the infrastructure is all over FORTY YEARS OLD!
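To put rough numbers on that gap, here is a back-of-the-envelope sketch. The watts-per-square-foot figures are the ones above; the 10,000 square foot floor area is purely a hypothetical example value:

```python
# Back-of-the-envelope comparison of a Cold War bunker's power
# infrastructure vs. modern datacenter demand. The W/sq-ft figures are
# from the post; the floor area is a made-up example value.

floor_area_sqft = 10_000            # hypothetical usable floor space
bunker_capacity_w_per_sqft = 90     # roughly what 1960s people-space supports
modern_demand_w_per_sqft = 500      # minimum a modern datacenter wants

bunker_total_kw = floor_area_sqft * bunker_capacity_w_per_sqft / 1000
modern_total_kw = floor_area_sqft * modern_demand_w_per_sqft / 1000

print(f"Bunker supports:  {bunker_total_kw:,.0f} kW")
print(f"Modern demand:    {modern_total_kw:,.0f} kW")
print(f"Shortfall factor: {modern_demand_w_per_sqft / bunker_capacity_w_per_sqft:.1f}x")
```

Whatever floor area you plug in, the bunker's electrical plant comes up more than five times short of what a modern build-out wants, before you even get to the cooling.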

Dude, you’re draining the amps I need to run the cages next door, knock it off!

To relate it to something most of my readers can understand, that is like asking an early- or mid-60s race car to be competitive today. First you have to completely restore it and rebuild it with all manner of modern upgrades, then watch as the new cars pass you like you are going backwards.

Sure, the James Bond Supervillain image is cool for about 30 seconds. But after that, you have a facility that can never truly compete without dumping cubic tons of money into it.

This market can’t support the “bunker” model unless the grid power available to it is dirt cheap, and you’ve basically gutted the bunker and completely rebuilt it. At that point what do you have that is competitive?

Oh yeah, nuclear strike survival. When that becomes a selling point I’m getting out of this business.

Site Maintenance.

Today is my 45th birthday. I’m celebrating by quaffing a bit of bubbly, processing some waste veggie oil into BioDiesel, and upgrading WordPress here on my website. Be patient while I perform this task. Be back soon!

Update, Monday: Well… that was fun! It seems I broke it for a while. Rog was right, I should have slowed down on the drinking and sped up on the RTFM’ing! 😉

The site didn’t break for any of you (except a few display items) but I managed to lock myself out of all the admin functions for about 18 hours. Logging in just sent me into a loop, as did doing a password reset… or even manually editing my user entry in the SQL database. Finally, at the suggestion of one of my staff late last night (thanks Josh!), I yanked all the site plugins and blew away my user passwd in SQL. That did the trick. I logged back in, was able to complete the upgrade, then went to bed. Finally fixed my passwd via a normal reset this afternoon and it seems all is well. I’ll start re-plugging my plug-ins again when I have some time.

Thanks for your patience and birthday wishes!… speaking of which, Sue & Nick took me to The Keg with my free birthday dinner coupon. I had a great steak (with bleu cheese & garlic… which woke me up later!) and some awesome wine.

An old habit dies… hard.

I have a confession to make: I’ve been using the same email user agent for about eighteen years. Yes… EIGHTEEN years. How many software products from 1990 do you still use?

In 1990 I was using a Macintosh IIsi, System 6.0.7, and Eudora 1. If I recall correctly it was version 1.3 or 1.5. I used my wife’s student account at the University of Washington to get online at first. A shell account on a UNIX host, a newsfeed (Newswatcher!) and trusty old Eudora for reading mail. I had a Hayes 2400baud modem at first, then I joined the 90s eventually with a Prometheus 14.4k modem, with built-in fax AND voicemail. (I was doing full-blown telephony in 1991!)

But trusty old Eudora was my mailer. It stayed my mailer.

I went through many machines (MacII, Centris 650, PowerBook 170, Duos, the infamous green 2400c subnotebook, iMacs, G4s, a TiBook that wheezed itself to death eventually, and now my current, though aging aluminum G4 PowerBook.) But Eudora remained my mailer.

I upgraded operating systems (System 7, OS8, did my best to skip OS9, jumped to X when it finally stabilized, through all the iterations of OSX up to 10.4) and Eudora kept on chugging. I managed to keep just about every bit of mail I had sent or received from about 1994 through 1998… when the great Jaz drive failure hit me as I was moving machines in the UK. Did I give up? Nope, I just started again.

Now I have just about every mail I have sent or received since 1998… all carried around in a pair of “Eudora Folders” on my hard drive (and backed up here, there, and everywhere!)

I have adapted to Eudora and it has adapted to me.

I have two distinct mail modes: work and non-work. I don’t read non-work email at work (except around lunchtime) and I TRY not to read work-related email when I am not at work, at least not on my laptop (that is what my Blackberry is for!) I have YEARS of well-tuned mail filters built (I should screen-shot them… they would astound you! Want to see them? Ask in the comments) and a signature file that is very long (it is how I have packed the “random quotes” here on my site.)

Unfortunately Qualcomm announced Eudora’s demise a while back and I knew this day would come. I test drove several other mail clients, but to be honest… all of them sucked. I know people think Eudora sucked, but it worked for me and I liked it. Hell, I stuck with it for EIGHTEEN YEARS!

I thought about Entourage. Yuck. Way too MS Office-ish. That big honking monolithic mail database terrifies me. Eudora has always stored mail in unix mbox format – plain old text files. Dealing with a corruption was just a matter of firing up BBEdit or vi. Clickty-click. I think that has happened to me three times in 18 years. I have known way too many folks who have had one form or another of Microsoft mail database files go tango uniform on them at inopportune moments. Frequently. No thanks.
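That plain-text heritage is the real insurance policy: because Eudora’s mailboxes are ordinary mbox files, just about any mbox-aware tool can read them, including Python’s standard-library `mailbox` module. A minimal sketch (the file name “In” is just a hypothetical example of a Eudora mailbox sitting on disk):

```python
# Eudora mailboxes are plain mbox text files, so Python's standard
# mailbox module can walk them directly. "In" is a hypothetical
# example path to one such mailbox file.
import mailbox

box = mailbox.mbox("In")
for message in box:
    # Each message exposes its RFC 822 headers like a dictionary.
    print(message["From"], "-", message["Subject"])
```

No monolithic database, no proprietary export step: if the client dies, the mail is still just text.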

I tried Mail.app. I really did. Inertia almost drove me there. It was the one I test drove the longest. But the rules/filtering is just abysmal compared to Eudora. The mailbox handling is lame. And I noted that it becomes a complete pig when you try to deal with large volumes of mail like I do. Searching through my multi-gig mailing list archives for some string of words? Seconds in Eudora! Minutes, or a system crash, in Mail.app. Yuck.

I’m planning a jump to OSX 10.5, mostly so I can support my family members who all use it. There have been issues reported for the last version of Eudora (6.2) on the latest OS from Apple. I figured now is the time to make the leap away from my old friend.

I thought about Odysseus, as it is billed as a modern replacement for Eudora. However, it seems to be stuck in a perpetual beta that, from the users I’ve talked to, sounds more like alpha.

I looked at Thunderbird. No thanks. The UI is just … well… bleagh.

I stumbled across a likely little application that seems to fit the bill: Gyazmail. It has a very flexible UI that allows me to make it behave very Eudora-like when I want it to. It has very good search, rules, and filters. It can import all my old mail(!)

I’m test driving it at the moment and liking it so far. Switched my work mail to it late last week, and my personal mail is still coming over one account at a time. So far so good. If you regularly contact me via email be patient while I work through this transition period.

Good-bye Eudora… it has been a good 18 years.