Behold! The awesomeness of Science & Engineering…

STS-129 Ascent Video Highlights from mike interbartolo on Vimeo.

If this video doesn’t leave you in awe I suggest you check your pulse.

As a photographer, occasional videographer, and a technology professional, I’m awed on many levels.

Visually this is stunning, as it captures many viewpoints that I, as an observer, would want to see as the proverbial “fly on the wall”… it is often these unusual perspectives and wide-angle lenses that produce the most dramatic images. My hat is off to the team who produced this masterpiece, as it not only answers questions (without a single word!) of HOW so many of the mechanics of shuttle launches work, but does so in a phenomenally artistic way. As a child I watched Apollo missions on TV, and the shape of those rockets limited the images that could be sent back to earth. Here, the Shuttle’s odd shape abets the use of angles that the Saturn rockets could never provide, and we benefit by being able to see in vivid detail the launch mechanics, the flight path, the release and return of the jettisoned bits**, and of course our beautiful home world spiraling away beneath.

This is so much better than the illustrations and animations I saw as a child. **As a kid I always wanted to see the perspective of the Saturn rocket booster stages watching the vehicle flying away (rather than vice versa) and following its view as it tumbled back into the atmosphere and, theoretically, the ocean. Now I can watch what I’ve always wanted to watch.

Finally, as a technology professional, I am reminded that there are people way smarter than I who have dedicated their lives to the progress of our species, taking scientific theory and cutting-edge technologies and putting them to practical use in the exploration of the universe beyond our little blue planet. I’m privileged to have met and count among my friends people who do this sort of thing, and it makes me proud to be a human being. While this short film is but a drop in the vast ocean of knowledge on the subject, it serves to remind us, the earthbound many, of the efforts of the sky-gazing few. Keep up the good work folks.

McHugh Plans Major Chicago Data Center « Data Center Knowledge

McHugh Plans Major Chicago Data Center – Data Center Knowledge.

I found one phrase very interesting in this post on Rich Miller’s excellent “datacenterknowledge” blog:

“…just blocks from the city’s major Internet connectivity hub.”

In military parlance this is called “fighting the last war.” Connectivity was the largest issue facing those of us who were building datacenters a decade ago. Getting onto “the wire” was really the hardest part and the availability of fiber-optic networks was by far the premier consideration when seeking a site for a datacenter. Back then, bringing fiber into the facility from even moderate distances was very expensive. A datacenter is a place where electricity is transformed into bits, on an industrial scale. Power goes in, bits go out over those fiber-optic networks. A decade ago getting to those bits was the hard part. It was expensive, and time consuming.

How times have changed. Today’s premier consideration in datacenter site selection is even more basic: electricity. How much and how cheap? Even a large facility’s output can be handled with a couple of bundles of fiber-optic cable, but the electrical input needs have grown enormous. Moore’s Law has a downside, and that is power consumption. Today’s servers burn up Watts at a rate their forebears a decade ago could only dream about. Today’s datacenter needs at minimum 5X the power it required in 1999, if not much more. The rate charged for that electricity is even more critical when it comes to site selection. This is why those at the leading edge of this business are building in places like the Columbia Valley. Home to more than just great vineyards, it is also where “green” hydro & wind power can be purchased for 1¢ to 3¢ per kilowatt-hour. Contrast that with rates in Illinois averaging 7¢ to 9¢ per kilowatt-hour. Over the useful life of the facility, that difference could be hundreds of thousands, if not millions, of dollars.
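
To put rough numbers on that claim, here is a back-of-the-envelope sketch. The facility load, useful life, and exact rates below are my own illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope electricity cost comparison (all inputs are assumptions).
HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(average_load_kw: float, rate_per_kwh: float) -> float:
    """Annual electricity cost for a facility drawing a constant average load."""
    return average_load_kw * HOURS_PER_YEAR * rate_per_kwh

load_kw = 1_000   # hypothetical 1 MW average facility draw
years = 15        # assumed useful life of the facility

columbia_valley = annual_energy_cost(load_kw, 0.02)  # ~2¢/kWh hydro/wind
illinois        = annual_energy_cost(load_kw, 0.08)  # ~8¢/kWh grid average

print(f"Columbia Valley: ${columbia_valley:,.0f} per year")
print(f"Illinois:        ${illinois:,.0f} per year")
print(f"Difference over {years} years: ${(illinois - columbia_valley) * years:,.0f}")
```

Even with these modest assumptions the gap runs well into the millions over the life of the facility.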

Digging a little deeper into the story, it seems that this facility was originally planned a decade ago, and was delayed when datacenter oversupply stalled facility building projects in 2001. Datacenter demand kept growing, and continues to grow at a healthy rate. Healthy enough to fill this Chicago facility when it is complete, I’m sure. The smart money, however, will go to the places where operations costs are the lowest, which is next to a dam somewhere. Wenatchee, Quincy, The Dalles. Those places are the future of the datacenter industry.

..This whole cloud thing.. The Five Big Fallacies surrounding Cloud Computing

Puffy little clouds...

An old college buddy called me last week. He works for a manufacturer who sells their product into datacenters (among other markets) and he wanted my insight, as a datacenter professional, into “This whole cloud thing”. It seems a colleague of his is trying to convince his whole company to prepare for the cloud computing paradigm shift, where “everything will exist within about 10 huge datacenters.” I have to admit I laughed when I heard that. My friend wanted to know what impact cloud computing will have on the future… “is what this guy saying really going to happen?”

In my answer I went over how much of the thinking and hype surrounding cloud computing is built upon fallacies, while ignoring the market realities. Let me outline those fallacies here:

Fallacy #1. New technology always supersedes old technology.
I wish I had a dollar for every time I heard or read something akin to “Everything will move to the cloud.” The basis of this statement is a deeply held fallacy in the minds of so many people who follow technology, most of all the “pundits” one reads in the trade rags and blogs. Everything? Really? This fallacy is an extension of an old economic fallacy, that of the limited market: every gain X makes in a market means that Y & Z must lose. For some reason, perhaps the desire of people to sort everyone and everything into piles named ‘winner’ and ‘loser’, they assume that in any given market there can be only one winner and everyone and everything else loses. So all thinking becomes weighted towards the supposed winner. In reality markets are in constant states of shift, and for all practical purposes have no limits. Further, if you have a product or service that satisfies a set of customer needs better than the competition, then you will make sales. Not all customers have the same needs, so there is no way that any single technology, service, or business model can exclude all others by its mere presence in the marketplace. Indeed, for “everything to move to the cloud,” the cloud would have to become the solution to all the needs of all the people. This is impossible. Additionally, needs change as conditions and states change. What I need now as I sit at home will be different from what I need sitting at my desk at work tomorrow, or while traveling next week. Cloud delivery assumes pervasive, persistent connectivity, which does not exist, and frankly likely never will. Technological tools should always have some usefulness in stand-alone, disconnected installations; otherwise they are just expensive, not to mention inefficient, boat anchors.

The new replacing the old seems like the natural order of things; after all, we no longer drive horse-drawn buggies, right? One could argue that the modern car is the natural evolution of the buggy, with the internal combustion engine merely replacing the horse itself as the prime motivator. The aircraft did not replace the car, despite nearly all pundits in the middle of the 20th century predicting it. (Where are our flying cars anyway?) Nor have many other older technologies vanished; trains and ships still traverse the planet despite much “better” technologies having been developed since their invention. Television hasn’t fully replaced radio or movies. Those markets instead have expanded to allow all these technologies to survive in their own niches. Those niches may expand and shrink over time, but the new technology rarely, if ever, completely replaces the old.

Bringing it back to information technology, even mainframes are still being built and sold, despite being perceived as technological dinosaurs. Why? Because they serve a need that can not be met by newer technologies. If anything the market for mainframes remains about what it was a decade ago. Sure, the market isn’t growing like it was in the 1960s, but it is likely actually larger in terms of physical units operating than it was back then. Cloud computing, even if it is wildly successful, will not replace the forms of computing we use today; it will only expand the market, providing solutions to some old problems but mostly to new ones. That is where technologies really bloom and create markets: when they solve new problems rather than merely replacing the solutions to old ones.

Fallacy #2. Cloud computing is new technology.
It is not a new technology, just a new name. (See RFC1925, Item 11 & corollary.) At its core “cloud computing” represents no new technology. It is just a buzzword-du-jour being applied to a collection of older technologies being packaged and sold in a new way. I’ve heard the term used to describe everything from Amazon’s EC2 to Skype, from Gmail to Salesforce.com. I find it hard to believe that these all fall into a single definition. Their sole commonality is that they are services delivered over the Internet. That would make my little website here part of The Cloud, and I can tell you it required no recent technological paradigm shift to spring into existence. It seems I’m not the only one with this opinion. If anything, I would argue that cloud computing isn’t really about technology at all, but rather a way of provisioning and selling computation. Before it was called “cloud computing” it was called various names at various times, as the concept iterated itself through history: Time Sharing, Client/Server, Network Computing, Thin Clients, Utility Computing, Application Service Provider, Grid Computing, Software/Platform/Infrastructure as a Service, etc. None of these terms, including “cloud computing,” describes any new technology, only ways of delivering or provisioning existing technology. It boils down to rental rather than purchase, period. If I rent you my car I can not claim to have invented the automobile. Or the concept of renting it, either.

Fallacy #3. Cloud computing will replace datacenters.
I’ve heard this fallacy from many sources, not just my friend’s colleague’s claim of the future where “everything will exist within about 10 huge datacenters.” Cloud computing represents no threat to the datacenter whatsoever. If anything it will just require MORE datacenters. That answers my friend’s worry, but how did this fallacy originate? Well, datacenters are very expensive. They are very expensive to build and very expensive to operate. As power densities (that is, the number of Watts available per square unit of measure within a given datacenter) go up, so do construction costs. The current average cost to build a datacenter in the USA to modern standards is between $1500 and $3000 per square foot. Compare this to the $150-$200 per square foot cost of the average office building in the USA and you’ll understand why CFOs tell their CIO/CTO counterparts to rent rather than buy. That is just the building part. Once the construction is complete you have to operate that facility, and that costs money too.
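
To give a sense of scale for those per-square-foot figures, here is a quick sketch. The 20,000 square foot building size is purely an illustrative assumption:

```python
# Construction cost comparison: datacenter vs. ordinary office space.
# The per-square-foot figures come from the post; the building size is assumed.
area_sqft = 20_000

datacenter_cost = (1_500 * area_sqft, 3_000 * area_sqft)
office_cost     = (150 * area_sqft, 200 * area_sqft)

print(f"Datacenter build: ${datacenter_cost[0]:,} to ${datacenter_cost[1]:,}")
print(f"Office build:     ${office_cost[0]:,} to ${office_cost[1]:,}")
```

Roughly an order of magnitude more, before a single Watt has been consumed.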

When you boil down what a datacenter does, it is pretty simple. A datacenter is a facility that turns electricity into bits, usually on a grand scale. The by-product of that industrial process, like so many other industrial processes, is heat. Power comes in, usually in vast quantities, and gets burned up by the silicon and rotating discs and transformed into bits, which exit the facility on the wires to be delivered to you, the consumer of bits. This makes heat, which has to be mitigated by mechanical cooling, because without cooling the computers will fail faster. This very website resides on a server in a datacenter. This server runs 24 hours a day, and even when people are not reading these bits it burns up energy and makes heat all day and night. Multiply that by the millions, if not billions, of servers running in datacenters around the globe. Now layer on top of that the cooling systems to mitigate the generated heat and you’ll see why operating these facilities is so costly.

Now let’s ice this cake. Moore’s Law has a flipside: the more powerful you make a computer (due to that increase in transistors on silicon), the more power it will consume. Nowhere is this as plain to see and feel as within the datacenter. When a datacenter is built, it usually has a fixed amount of power allocated to it in the form of amps (or, if purely AC power, perhaps in VA) which it can not exceed; this is its total electrical capacity. That power is then split, traditionally around 50/50, between “IT Load” (in other words the servers and the UPS gear) and “Mechanical Load,” which is dedicated to cooling the IT Load. Watts are what the facility consumes as it runs. By analogy, if you think about your car, the amps/VA/capacity are the limits of the engine’s power, while Watts are the gasoline the car consumes over its lifetime. So building a datacenter is like buying the car, and the operating costs are the consumables it uses over its lifetime.

Like automobiles, datacenters have seen increases in capacity and power over the last several decades. In the 1960s the datacenter was a large room with a few, or maybe just one, large computer. The advent of mini/micro/personal computers has transformed the datacenter. Now they are large buildings with thousands, if not tens of thousands, of computers in them. The electrical capacities and densities of datacenters have risen exponentially as well. A decade ago 90 Watts per square foot was a high-end facility; now nobody would bother building such a datacenter. 300-600 Watts per square foot is common today, and higher densities are planned or even already built. The tighter you pack the servers, the hotter it gets inside the datacenter. Computers have become consumables themselves, and some are now building datacenters with minimal cooling, assuming that heat-related failure is just the signal that it is time to replace the server anyway. The building and the electricity are the true expenses, and the computers are cheap commodities, just there to be used until they fail. Turning decades of IT thinking on its head! Maybe Dilbert was wrong?
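
A toy model makes that capacity split concrete. The raised-floor area, design density, and the 50/50 rule of thumb below are illustrative assumptions, in line with the rough figures above:

```python
# Toy model of a datacenter's fixed electrical capacity, split (per the
# traditional rule of thumb) roughly 50/50 between IT load and cooling.
raised_floor_sqft = 10_000          # assumed raised-floor area
design_density_w_per_sqft = 300     # assumed IT design density (see above)

it_load_kw = raised_floor_sqft * design_density_w_per_sqft / 1_000
mechanical_load_kw = it_load_kw     # ~1 W of cooling per 1 W of IT load
total_capacity_kw = it_load_kw + mechanical_load_kw

print(f"IT load:                 {it_load_kw:,.0f} kW")
print(f"Mechanical (cooling):    {mechanical_load_kw:,.0f} kW")
print(f"Total facility capacity: {total_capacity_kw:,.0f} kW")
```

Double the density and the facility needs double the electrical capacity, which is exactly why power, not connectivity, now drives site selection.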

Datacenters may be changing, but cloud computing doesn’t change datacenters or their economics. Cloud computing providers still have to build and run datacenters (or rent the space from colocation providers.) Those require capital expenditure. In order to pencil out economically, the cloud provider has to either charge their customers enough to pay for the build in a reasonable amount of time and cover the monthly operating costs, or oversell their capacity and hope it doesn’t bite them in the ass. This is why I’ve said the only reasonable current cloud provider business model is Amazon’s, which is based on excess capacity. Essentially the cloud customers contribute to Amazon’s datacenter ROI while they scale their own operations. It is a brilliant model. But anyone who is starting out as a stand-alone cloud provider faces a rough road to profitability.
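
My point about the economics can be reduced to a naive payback calculation. Every figure below is a made-up illustration, not any real provider’s numbers:

```python
# Naive payback sketch for a stand-alone cloud provider (all figures assumed).
build_cost = 30_000_000      # capital to build (or fit out) the facility
monthly_opex = 400_000       # power, cooling, staff, bandwidth, maintenance
monthly_revenue = 1_000_000  # aggregate customer billings

monthly_margin = monthly_revenue - monthly_opex
payback_years = build_cost / monthly_margin / 12

print(f"Monthly margin:             ${monthly_margin:,}")
print(f"Years to recover the build: {payback_years:.1f}")
```

If the revenue assumptions slip, or the provider oversells capacity to juice that margin, the payback period stretches out quickly, which is the rough road I am referring to.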

All the world’s computing needs can not be collapsed into those “ten huge datacenters” my buddy heard about. The reality is that as industry, business, and society use more and more information technology there will be more and more datacenters. They will range in scale from re-purposed broom closets to giant campuses of warehouse-sized facilities. Many organizations have very specific needs that cloud computing may never be able to address, and for them there always has to be the choice of a traditional facility…

Fallacy #4. Cloud computing can work for any IT need.
This is really a corollary of the “everything will move to the cloud” statement I hear so often. There are several IT needs that can not be solved with cloud computing. Meeting audit requirements is one. I’ve written about this fallacy before, and it caused a bit of an uproar. It seemed to be the first time anyone brought this issue up, and it became a hot topic in the cloud blogosphere for a short time. I felt vindicated when a cloud provider admitted what I said was true.

The basis of cloud computing is the same basis as web hosting: your data on somebody else’s servers. The same reasons that people choose not to use a web host apply to a cloud provider. Control of assets. Risks associated with overselling capacity. Support concerns. Interoperability concerns. There are literally hundreds, if not thousands, of reasons why IT organizations and individuals would prefer to keep their data out of a cloud computing system. Most resolve to a single word, which is trust. That brings us to our last fallacy…

Fallacy #5. The cloud is secure.
Cloud computing is no more secure than any other form of computing, which is to say, not very. Or perhaps more accurately, it is only as secure as it is designed and managed to be. For an excellent analysis of data security in a cloud environment I highly suggest a read of Rich Mogull’s thoughts on the subject. Rich obsesses about all things security and does a far better job than I could in delving into the specifics. To his analysis, however, I’ll add a more encompassing, and less data-specific, view of security that is more about trust and consequences than about the integrity of the data itself.

The greatest hurdle to the widespread acceptance of cloud computing is trust. Trusting one’s data to systems whose location, condition, environment, and state of load are virtually unknown is a difficult thing to do. Many of these questions apply to any services (hosting, colocation, SaaS, etc.) purchased online, but the “cloudy” nature of cloud computing amplifies many of them beyond the simple answers found in other scenarios: How well is that data protected? How stable is the company that owns the infrastructure? Is the datacenter owned and maintained by the cloud provider, or is it colocated in some other company’s facility? If the latter, is the cloud provider keeping current on all its bills, or is their installation subject to suspension by their colo provider? What about bandwidth? Is the cloud provider multi-homed? Do they have geographic redundancy? What happens when the power goes out? Have they tested their generator(s)? How well are their power backup, network, and HVAC systems maintained? Is there anyone on-site if something goes wrong? What sort of SLAs do they have? What happens to our data if the cloud provider goes out of business? What sort of security is in place to monitor their customers? What happens if somebody else on the system(s) we’re using is a spammer? How do they handle blacklisting? Could AUP violations by other customers impact our operations? Are assigned IPs SWIPed to customers, or does everything track back to the cloud provider? What happens to our data when we scale back usage, or cancel our service? I could go on and on.

Many of these issues resolve to how much you can trust your cloud provider. Trust takes a long time to build. Most of IT is fairly critical corporate data and infrastructure, so it may be some time before trust is built up enough to move much of this sort of data to cloud deployments. Trust can also evaporate almost instantly once it is lost, so all it will take is a single high-profile cloud-related failure to put all cloud business at risk.

Now it may seem that I’m somehow “anti-cloud”. Nothing could be farther from the truth. It is a sensible method for provisioning computing resources on demand, and it fulfills a very real market niche. I just do not believe that it is the answer to every IT problem, nor is it the future of IT, only a small portion of it. Cloud computing will expand the market. I can envision a very near future where companies use a hybrid of traditional dedicated datacenter resources with cloud deployments to extend, replicate, or expand as demand warrants. The cloud is indeed a new paradigm, but it lacks the underlying “shift” that alters the entire industry around it. The pundits should sheathe their hyperbole and focus on what cloud computing can do for people, rather than what it will do to the marketplace.

Defending The Data Center… from WHAT exactly?

absurd or plausible? I think the former.

Defending The Data Center – Forbes.com.

This “datacenter as terrorism target” meme has to die. Seriously. It clouds (pardon the pun) the real issues of physical and network security in our industry. If you have to seize a hot button topic like “terrorism” to communicate something important (yet completely unrelated) then you are not communicating properly.

I’ve written about this previously but it bears repeating: Datacenters are genuine parts of the first world’s infrastructure, but infrastructure is never the target of terrorism. The minds of people are the target, and in the case of 9/11 infrastructure was the weapon and symbols of capitalism and government were the targets.

It is far too expensive and time-consuming to attack infrastructure. Infrastructure only becomes a target in times of war between nations. If we’ve reached that point, then we have much larger worries. Meanwhile the realistic focus should be on criminals, infiltrations & DoS attacks (which the recent attacks on Twitter & Facebook mentioned in the article actually were!) and perhaps competitors (aka industrial espionage) long before we start throwing terrorists into the mix of threats to datacenters and their contents.

Enderle Idiocy, Schneier Wisdom: “Terrorist Risk of Cloud Computing”

Schneier on Security: Terrorist Risk of Cloud Computing.

Bruce Schneier gets it COMPLETELY right (about Rob Enderle being completely wrong) when he says:

“…the main point of the article, which seems to imply that terrorists will someday decide that disrupting people’s Lands’ End purchases will be more attractive than killing them. Okay, that was a caricature of the article, but not by much. Terrorism is an attack against our minds, using random death and destruction as a tactic to cause terror in everyone. To even suggest that data disruption would cause more terror than nuclear fallout completely misunderstands terrorism and terrorists.”

There is a common logical error people make when trying to assess risk: planning without thinking. Making invalid assumptions without proper analysis. Nowhere is this as obvious as when people discuss protecting things from terrorist attack. Terrorism ignites all manner of fear in people, even without the “terrorists” having to actually DO anything. Fear is indeed the mind-killer here, as people toss away all logic and let their imaginations run wild, conjuring up all manner of fearful outcomes. They literally lose their minds and lose the ability to think clearly.

Of course Rob Enderle is a proven idiot and is obviously incapable of thinking. He merely lobs grenades and trolls for flames wherever he writes, always constructing bizarro arguments on assumptions and fallacies. Schneier rightly points out one of these fallacies when he scoffs at Enderle’s statement: “The Twin Towers, which were destroyed in the 9/11 attack, took down a major portion of the U.S. infrastructure at the same time.” The U.S.A.’s infrastructure suffered virtually zero damage on 9/11. In the grand scheme of things the 9/11 attack was less than a pinprick in our national skin. The air transport system was back to normal within a week. The stock exchange was trading again in a few days. More people die falling off ladders each year in the USA than were killed on 9/11/2001.

The point of terrorism is found right there within its name: terror. Shock. Outrage. Fear. Paralysis. Over-reaction. That is what terrorists want. Their aim is to provoke maximal emotional reaction with minimal effort. Therefore terrorists attack specific targets chosen for maximum shock and outrage. They attack symbols. They attack people. They seek to have visibility. They don’t attack infrastructure. In the case of 9/11 infrastructure was the weapon, not the target.

Nation-States engaged in warfare attack infrastructure. The fastest way to disable an enemy is to destroy their means of communications, transportation, and manufacture. This is how warfare has been conducted since the mid-20th century. Technology allowed the expansion of the battlefield into entire continental “theaters of war” and technology allowed warring nations to attack each others’ technology. This is the natural evolution of conflict that began when our ancestors first beat each other with rocks.

The error that Enderle, and so many others, make is mistaking terrorism for warfare. Terrorism is NOT warfare. The purpose of attacking infrastructure is to weaken the opponent so as to make warfare easier. The destruction of infrastructure allows the next logical step in warfare: the attacker destroying their enemy and/or invading their enemy’s territory. Terrorists are not interested in those steps. They are not seeking to invade or destroy. They merely want to inflict maximum emotional damage at minimal cost. Osama bin Laden spent very little money to execute the 9/11 attacks. Sure, it may have been over a million dollars, but it provoked a trillion+ dollar response. THAT is the point of terrorism.

Datacenters, Telecommunications Infrastructure, Carrier Hotels, Long-Haul Fiber-Optic Circuits, and by extension, “Cloud Computing” will never be terrorism targets. Ever. They have no emotional value. Their disablement or even destruction provokes no visceral emotional reaction or outrage (except in the people like myself who must build and maintain them, of course!) Ask yourself this: if the 9/11 hijackers had flown those planes into One Wilshire, The Westin Building, and the Google Datacenter in The Dalles, Oregon, would we be fighting wars in two Middle Eastern countries today? The answer is: “No.” In fact it may not even have been seen as a terrorist act at first, instead being seen as a random set of accidents. It would not have been seen live on TV around the world, and people would not have been affected much technically and certainly not emotionally. Today it would be one of those dimly recalled events of yesteryear. “Oh, remember when those plane crashes made the Internet slow for a few hours?”

Western Landscape with E-type Bonnet

I’m trying to work from home today, but instead am wrestling with the technology that allows me to do that. Namely, my VOIP phone system and the VPN it rides on are acting up and driving me crazy. So I’m seeking a little peace as I pause in the struggle. What better way to calm nerves than to gaze upon a serene and inspiring landscape? This photograph was taken from a pause in the ascent of the Going To The Sun Road in Glacier National Park. I can’t recall exactly where, but from the looks of things it is still pretty low, just beyond the big switchback after the tunnel.

ahhhh.. I feel better already.

A milestone reached: A MegaChuck of output!

Above: A peek behind the scenes...

Back in September of 2005, right after I came back from the Colorado Grand, I switched from building all my webpages by hand to running WordPress. Prior to that I had written all the HTML code one character at a time in Rich Siegel’s wonderful BBEdit and dropped the pages on my server (a shockingly underpowered machine!) located at digital.forest’s datacenter. It was honestly a huge pain in the ass, and I rarely updated the site because of this. Updates usually only happened in the midst of some important event, such as driving from NYC to LA with a bunch of other old cars, or a wonderful road trip with Nicholas as we brought the Jaguar home for the first time. I’d been using the “Movable Type” content management system (aka “blogging platform”) at work for our support website, so I already had an idea of what I wanted. WP looked to be the one to use, so I set it up on one of the web servers at the office and started putting in content. Along the way I’ve picked up a nice group of folks to chat with… several hundred of you actually. Some knew me before I started, quite a few have found this place since. A lot of you have hung around and really participated. Thanks!

I noted today that I’ve reached something of a milestone with this post: the 1,024th one since I started using WP to publish my photos, thoughts, confessions, news, and occasional maniacal rants. One thousand and twenty-four. That’s a magic number for us geeks, as it is the nearest we get to counting to one thousand, though it only takes us twelve numbers to get there. I figured I’d celebrate the milestone by sharing a few thoughts I’ve had about what I do here. Just as I said from the outset, I’m not looking to be a well-known pundit, or a vaunted member of the “blogosphere” … I just want to develop and present good stuff that rattles around in my head and eyes. In random order, here are some thoughts:

  • Re-running old rally stories. The idea here is to repost some of my old (pre-blog, so 1998-2005) vintage rally stories, but this time with the ability to flesh out the tale a bit more. Often these were written in a summary style, late at night after a hard day’s driving, followed by dinner (with drinks!), lots of photo editing and uploading from dodgy hotel Internet connections, and written while my rally partner was snoring in the other bed. I’d re-write them and post them in a daily order.
  • Interviews with other “car guys.” I’d love to develop a series along this line of thought: Talk to people who self-identify as “car guys” (no matter their gender) and find the common threads as well as the differences. Get their stories, histories, etc. The origins of their love for the automobile, the cars that got away, etc. I already know so many people I could talk to… literally around the world.
  • Some more of the same. The tried and true: Rally & Road Trip stories in Real Time. Car Photo of the Day. Engine pR0n.
  • Some Whimsy in the Mix. A bit of storytelling. A sprinkle of time lapse photography (I think I can get uber-HD time lapse stuff from my new DSLR!) More antique computer stuff if I can find the time.
  • A new WP theme. That is, a change in the layout of the site. The content will remain, but the look will change. I always meant to move off the default “WP Classic” theme, as it is… dull. I never got around to it. I actually have another site where I’ve played with WP themes (don’t bother looking for that site… it is very anonymous and has nothing to do with me. It is just a place where I practice writing for writing’s sake). I think I’ve got a look worked out and if I can find the time I’ll implement it here. If you are some sort of Luddite and actually LIKE the way this site looks currently, let me know. 😉
  • Some behind the scenes stuff. This is mostly server-related. Most of my photos are still hosted and served from my shockingly underpowered machine (seriously, it is a 266MHz G3!) while the WP site runs from a d.f FreeBSD shared hosting box, and the database is running on yet another d.f shared hosting server. I plan to collapse all those back down to a single machine… this time only mildly underpowered. Having the db and the http on the same box will let me do a few whizzy back-end things. No change for you, except maybe it will be a tad faster.

Feel free to comment and let me know what you think.