Slashdot | Price of Power in a Data Center

Interesting read, on a subject I know pretty well. We will likely have to institute power surcharges for our colocation customers soon.

I liked this interesting tidbit from the comments section:
Also the street price for a 20A circuit in a datacenter is $200-$300, while the cost of a megabit is $100 or less. So a rack of servers that requires two power circuits and pushes 3Mbps (not an unusual scenario) costs twice as much in power as in bandwidth.
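
For the curious, the back-of-the-envelope math behind that claim works out roughly like this (a quick sketch; the dollar figures are the commenter's street prices, not ours):

    power_per_circuit = 250      # $/month per 20A circuit, midpoint of the quoted $200-$300
    bandwidth_per_mbps = 100     # $/month per megabit, "or less"

    circuits = 2
    mbps = 3

    power_cost = circuits * power_per_circuit      # $500/month
    bandwidth_cost = mbps * bandwidth_per_mbps     # $300/month
    print("power $%d vs. bandwidth $%d (%.1fx)" % (power_cost, bandwidth_cost, power_cost / bandwidth_cost))

At the high end of the quoted circuit price it comes out to exactly 2x; at the midpoint it is closer to 1.7x. Either way, power is the bigger bill.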

I’ll write more about this subject soon.

Keychain Access Hurdle Cleared!

I have a quote in my email .sig file, from one of my staff, and it is likely in this blog’s “random quote” database as well (just keep hitting “refresh”). It goes like this:

There’s only so much stupidity you can compensate for;
there comes a point where you compensate for so much
stupidity that it starts to cause problems for the
people who actually think in a normal way.

-Bill, digital.forest tech support

“Bill” in this case is Bill Dickson, my highly valued sysadmin and a true treasure. You can see his blog (WRD) listed in my blogroll, though I swear he is going for a world record for NOT updating his blog. He’s close to a year now.

Anyway, that rather insightful quote sums up what is going on with me and my keychain. I vented on a couple of mailing lists and was informed (by some well-informed people both inside and outside of Apple) that Apple originally designed the Keychain system to work just as I was using it: independent of the login password, and flexible enough to allow people to use multiple keychains however they wished.

The user community apparently bitched and complained a LOT to Apple that they didn’t like the fact that the two passwords were independent of each other, and that a change of the login password SHOULD also change the keychain password. I guess the majority of MacOS X users DO keep their login and keychain passwords the same. Me? I think that is stupid. I guess a lot of software engineers inside Apple thought it was stupid too. It became something of a fight between engineering and marketing (isn’t it always?) and engineering finally lost with 10.4.

So Apple caved and compensated for their customers’ stupidity, and ended up burning not-stupid people like me in the process.

Oh well. If you are a software engineer at Apple, the phrase “Asshat at Apple” I used in yesterday’s rant was NOT directed at you. But feel free to assume it refers to the people who forced you to make that change in the default behavior.

Speaking of forcing… I got my password back. It seems the “please select a longer password” dialog box is a placebo. If you just keep force-feeding the Keychain Access utility your unapproved password, it will accept it. Go figure.

I’m happy as a clam.

Whisky Tango Foxtrot Apple Keychain Login Sync

On a whim, in MacOS X 10.4, because I was tired of my old login passwd, I changed it. No biggie, right?

I was presented with a dialog, basically saying “Your keychain password has also been changed.” …huh?

Bahhhh!!! No! I didn’t want that!

Grrr… so I go stomping off to the Utilities folder for the Keychain Access app and find, buried in a preference somewhere, a CHECKED box saying “synchronize with login password” or something similar.

WTF??

I NEVER checked that box. I have always… always had different login and keychain(s) passwords. I would never dream of making them the same. What asshat in Cupertino decided to make this choice for me?? And HOW can I change my default keychain password back to what I want it to be? (Perhaps I missed the obvious in my blind rage.)

I REALLY want to kill somebody at Apple right now. What a bonehead thing. No warning beforehand, it just changed something very personal and important. Of course I have UNchecked that box now, but it is too late. I’d like to change my muscle-memory-embedded keychain passwd back to what it should be. If I missed some obvious design change, announcement, READ ME file, or clearly marked option that led to this situation, feel free to point it out to me. Otherwise bring me the head of the idiot who dreamed up this stupid default action in MacOS X “Tiger”.

Maybe this is “normal” now that I think of it… I may have always changed my password at the CLI prior to this… I think I better go eat dinner and calm down.

Update: After I calmed down a bit, I went back into Keychain Access and found the obvious menu choice to change the password for the keychain. No need to clue me in about that. BUT, let me tell you about keychains, and how I use them. I have a login password. I have my laptop set to require both a username and password at login (IIRC OS X defaults to a list of usernames, with only the password field blank.) I also require my login password for waking my laptop from sleep, or to get past the screen saver.

Keychains, to my mind, are completely different from login passwords. The keychain is where you store all your various passwords for all those email accounts, web servers, stupid blogs like this one, etc. I actually use several keychains. I have my default keychain, which is where I store the most frequently used, but in no way terrifically important, passwords: the passwords for this stupid useless blog, or the shared IMAP boxes we use at work to read generic email addresses like “support@forest.net”, “abuse@forest.net”, etc… you know, the ones listed in whois that get more spam than real email. I have several other keychains, and these store progressively more secure data: access passwords for ARD, Timbuktu, SNMP strings, specific personal passwords and data that I prefer to keep secure. And then finally there are some passwords I just won’t keep stored anywhere but my brain: enable passwords for BGP routers and other network devices, root passwords for our DNS and mail servers, etc.

Every keychain has a different password and they get progressively more complex with the level of security required for the keychain data.

My default keychain has had a four-character password. Mind you, it isn’t a word, or even anything logical. It is a random string of 4 characters from 3 different rows of the keyboard. I have been using this password, and simple variations of it, for 15 years. It takes me about a nanosecond to type it. It is so deeply ingrained in my muscle memory that I can bang out those four keystrokes and hit return, even with my clumsy two-finger typing style, in less time than it takes me to type any other 5-character string imaginable. This is WHY I use it for mundane, default keychain access… clickety-click!

But NO. Some Apple asshat, probably the very same asshat who decided to default the login/keychain sync, decided that THEY get to decide what level of security my default keychain should have:

Just fsckin bite me!

If I feel the need to protect my lame, low-security slashdot login with a 4-character keychain password, then LET ME. I am an adult. I understand that somebody could steal my laptop, run a crack against my default keychain, and probably crack it in a few minutes. Big deal. So they’ll have access to some stuff that I have already decided is low-security, which is why I prefer to keep access to it EASY rather than hard. If I want to be an idiot, LET me be an idiot. Please. I’m fine with the risk concerning this particular data. Sheesh!
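
To put rough numbers on that “few minutes” claim, here is a quick back-of-the-envelope sketch (the guesses-per-second figure is an assumption on my part, not a benchmark of any real cracking tool):

    charset_size = 95            # printable ASCII characters on a US keyboard
    length = 4
    guesses_per_second = 1000000 # assumed offline cracking rate, not a benchmark

    keyspace = charset_size ** length        # 81,450,625 combinations
    seconds = keyspace / guesses_per_second
    print("%d combinations, about %.1f minutes worst case" % (keyspace, seconds / 60.0))

About a minute and a half at that rate, and even a much slower attack gets there in well under a day. Which is fine. That is the whole point of a low-security tier.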

So if anyone knows some clever way around this stupid limitation, please let me know. I have no problems doing it on a command line; been there, done that since Bush the Elder was borking up the economy.
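
One avenue worth poking at: the `security` command-line tool has a `set-keychain-password` subcommand. Whether it is present in 10.4, and whether it skips the GUI’s minimum-length nag, I can’t say for certain, so treat this as a sketch and try it on a throwaway keychain first (the keychain path and passwords below are placeholders):

    import os
    import subprocess

    def set_keychain_password(keychain, old_pw, new_pw):
        # Calls the macOS security(1) tool. Passing passwords as arguments makes
        # them briefly visible to other local users via `ps`; leave off -o/-p and
        # the tool should prompt for them interactively instead.
        subprocess.run(
            ["security", "set-keychain-password",
             "-o", old_pw, "-p", new_pw,
             os.path.expanduser(keychain)],
            check=True,
        )

    # Example with placeholder values -- substitute your own keychain and passwords:
    # set_keychain_password("~/Library/Keychains/login.keychain", "OldPass", "xk3!")

Wrapping it in a script is overkill for a one-liner, of course; the same command typed straight into Terminal works just as well.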

Until then, I will be tripping over my fingers typing three extra characters to get to slashdot, spam, and this dumb blog. sigh.

Rising costs, fixed prices.

Pondering pricing in a post-economic downturn era.

One of the services we offer at digital.forest is data backup. We have four backup servers that run scripts to back up data from “clients” (that is, servers owned by clients of digital.forest, running “client software” for the backup server. Got it?) Like everything else we offer, the price of backup hasn’t changed in well over four years.

Right before the “dot com crash” (which wasn’t really a “dot com crash”, but I’ll explain my views on that some other time) we actually performed a large-scale review of all of our offerings and what the competition was charging – and started adjusting our prices accordingly. At that time, we were a small-scale operation with a set of niche offerings. The only price hike we managed to complete prior to the economy’s turn was on FileMaker hosting. We are the largest FileMaker database hosting operation in existence. At one time it was a growing business, but between 2000 and 2004 it seriously stagnated… mostly due to FileMaker Inc (FMI) taking far too long to rev FMP 5 (again, I’ll have to leave my views on FMP and FMI for another post.) We saw a lot of our clients migrate to PHP/mySQL solutions from Lasso-or-CDML/FMP solutions. Hosting FMP databases is a very expensive business to run since it requires more resources – more software and servers per customer – than just about any Internet database offering I can think of. Odd, considering that it is considered a “low end” database solution. So it made sense to raise our prices, especially since FMI kept raising theirs. If I recall correctly we raised them about 10-15%, but only lost about 1% of our clients due to the price hike. That was an interesting exercise in capitalism. Too bad the economy, other database products, and FMI’s slow work on what eventually became FMP 7 managed to wipe out 40% of our FMP hosting business over the next four years.

Thankfully other offerings filled the gap. Server colocation became a significant part of our business. We had built a pretty nice little datacenter by 2000. It was small, but had almost everything you would expect to find in a large-scale industrial datacenter, just on a small scale. It was basically some converted office space in Bothell, but we had a great backup power system, and multiple fiber lines coming into the building. We were an autonomous network with BGP4 connectivity to several major Internet “backbone” providers (I hate that term, but I’ll use it here for simplicity.)

In 2001/2002, when large colo providers were going down every week, or consolidating datacenters, we went from being viewed as “risky because we’re small” to being “safe because we’re small.” Another thing that happened at the same time, and continued well into 2004, is that prices plummeted. Webhosting rates fell by 60% or so, and server colocation fell through the floor to unsustainable rates. I remember in 2000 Exodus charged anywhere from $4000 to $8000 a month for a single rack. We charged $2000, which was “cheap.” Within two years the “big boys” (which in Seattle meant only InterNAP and a few remaining operators) were practically giving rackspace away. I remember losing an 8-rack deal to InterNAP in early 2003 when they lowballed the price to something insane like $250 a rack. It was obvious they were floating on investment capital, had a big huge brand new (but mostly empty) facility to fill, and knew that any revenue was better than no revenue. We have never been big enough to operate like that. Our colo prices have come down though, right along with the rest of the industry. No, you can’t buy a rack from us for $250, but we have gone from being a “value priced” provider to being about the same as everyone else, if not a little high. I’m OK with not being the cheapest, mostly because we offer what so few providers can: personal service. We are a niche player, not a commodity one.

Today we are still here, still growing, and overall doing pretty well. We moved into a new facility (ironically, one built by a failed competitor) and now actually do have a top-tier facility in every way. Unfortunately the costs of operation have grown at the same rate as our growth, and we have basically kept the same level of profitability all along (if you were to pool our total profit over the past three years you could buy a small Korean sedan.) We at least are marginally profitable, unlike so many in our industry. We’ve done it by taking advantage of every cost savings we could find (in bandwidth, equipment, etc.) and keeping the rising costs (electricity, storage, people, etc.) as much under control as we could.

So our prices have either stayed where they were in 2000, or in many cases, gone down. One price that has been frozen is data backup. Back in 2000 we charged $30 a month for data backup. Back in 2000 your average web server had maybe 250 megabytes of data, with 20 megs of that changing on a daily basis (usually database dumps.) We were running a VXA tape library with a 15-tape capacity, and our other two backup machines ran single-drive AIT tapes. So at $30 a month we were covering the cost of the tape autoloader, and would probably have made a buck or two per client once the cost of the library was covered. I doubt it ever was, because by 2002 we had to start backing up to hard disks. Why? There just was not enough time in a night to back up to tape anymore. Our backup window kept growing until we were backing up during non-night hours. When our backup software started supporting backups to HDDs we jumped on it and started buying the biggest disks we could (at that time around 100GB) and using them like tape – chew them up and throw them out. When drives got bigger, we bought bigger drives – 120GB, 180GB, 200GB, 250GB. Of course, so did our customers, so we were rarely able to stay ahead of the time/capacity curve.

Apple shipped their Xserve RAID drive array a couple of years ago, and we have purchased a few since then to add to our arsenal of backup and storage devices. We sell space on one for clients, but use the others for backup media.

About three months ago I cried “uncle”… Here we are, spending tens of thousands of dollars to maintain a service we are making a few thousand dollars a year on. We’ve fallen into the same trap our competitors did when they dropped colocation prices in 2002… only in our case it wasn’t that we cut prices, it’s that we never raised them to at least match the cost of the service provided.

We are using close to 6TB of storage, and backups now run 24/7. Any pause for a data restore puts us in a position where we play catch-up for several days. Clients complain about missed backups (your server is too slow? sorry, we have to skip you); clients complain about backups happening during business hours (OK, we can put you in the special “nighttime” script, but no guarantee that we can back you up every night); clients complain about the time it takes to back them up (let’s do the math… three 250GB volumes of mostly uncompressed and non-compressible data, over a network at around 250-300 MB per minute… that is almost two days!)
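
For those checking my math on that last one, here is the quick sketch (using the midpoint of the 250-300 MB per minute range):

    data_mb = 3 * 250 * 1000          # three 250GB volumes, in MB
    rate_mb_per_min = 275             # midpoint of 250-300 MB per minute

    minutes = data_mb / rate_mb_per_min
    hours = minutes / 60.0
    print("about %.0f hours, or %.1f days" % (hours, hours / 24.0))

That works out to roughly 45 hours – call it two days – and that assumes nothing else is fighting for the backup server or the network in the meantime.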

The client who has a small server with a few hundred MB of data? They are still paying a reasonable data backup price at $30 a month. The client with more than 50GB of data (and we have some with more than a terabyte)? THEY are getting way more service at $30 a month than they can imagine, even when we skip them or miss them entirely a few times a week.

It is obvious that we have to implement a pay-for-what-you-use data backup system, and that is what we are about to do next month. It could not come soon enough for me.
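
For illustration only, the general shape of such a scheme might look something like this – the included quota and per-gigabyte rate below are made-up numbers, not our actual new pricing:

    def monthly_backup_price(gb_protected, base=30.0, included_gb=5.0, per_gb=0.50):
        # $30 base matches the old flat rate; the quota and overage rate are invented.
        overage_gb = max(0.0, gb_protected - included_gb)
        return base + overage_gb * per_gb

    for gb in (0.3, 50, 1000):
        print("%7.1f GB -> $%.2f/month" % (gb, monthly_backup_price(gb)))

The small-server client keeps paying roughly what they pay today, and the client with a terabyte finally pays something that bears a passing resemblance to what it costs us to back them up.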

The Rains Have Returned

I don’t comment much about weather, but the subject came up in an iChat with a friend on the east coast.

Here in the Pacific Northwest we really only have two seasons, “wet” and “dry”. “Wet” lasts from sometime in September or October until early July. “Dry” lasts from early July (usually the 5th or 6th!) until sometime in September or October.

In the 1980s “drought” years, “dry” would sometimes last until November. I remember climbing “Outer Space” on the Snow Creek Wall over a weekend in early November around 1987 or so. It was cold at night but very warm… “hot” even… during the day. Recently our weather has been unsettled, with either VERY wet years (in 1999 we didn’t really have a “dry” season until September) or very dry ones: last winter was about as dry as “wet” can be, with hardly any snow in the mountains and very little rain down here. Oddly enough, the past few years’ “wet” started big, with some big October storms… these pictures were taken two years ago today… but then settled into a “very sparsely moist” pattern rather than our usual full-on “wet”.

Well, the “wet” has returned to the Pacific Northwest. October has been more rainy than clear, and quite chilly as well. We had a brief little “Indian Summer” the past two days: mostly sunny, temps in the high 60s F. Friday I drove the Jag down to the body shop to get the bonnet ding repaired, and yesterday was spent hacking back the grass since the sun was out (making hay while the sun shines, as it were.) Today, however, reinforces how brief that nice respite was. Rain, mist, fog, temps in the 40s and 50s F.

It will be this way, relentlessly wet, with only high winds and storms to break the monotony, from now until January when the “storm season” ends, and then it will just be plain old rain. If you were a weatherman you could basically say “Rain, mixed with showers, with a rare sun-break, lows in the mid-30s, highs in the mid-50s” from now until March. Sure, we’ll have a few snows sprinkled in there, and of course that one week in January or so when the sun comes out… just to keep us from killing each other. Sometime in April we’ll start to see more sun and warmer temps, and the Jag will come back out of the barn now and then. Until then, you won’t hear me talk about it other than winter-time projects. (Like my plan to perhaps do something about the radio console once and for all.)

When the “dry” season returns, I’ll comment about our reward for the crappy weather we put up with around here. Until then, I’ll try not to say much about the weather.

Stop me somebody…

before I buy this.

It is close by. It is cheap. It is legendary for reliability – one was the world record holder for mileage: 1.9 million miles(!). It is of course a Diesel. Not exactly the 300SD or 300SDL I’ve been looking for, but still. I bet it does 0-60 in at least 30 seconds! (downhill, with a tailwind!)

Update 10/23/05: I managed to show some self-restraint and didn’t bid. The final price wasn’t too bad. I did some reading on the model and it was the first really successful Mercedes-Benz Diesel for export. It has ZERO collectible value though.

WVO Filtering Setup

I finally photographed the home-brew Diesel setup I built:

WVO Filter

The waste veggie oil goes in the top barrel. The sawed-in-half gas can acts as a funnel. The oil comes to me in the white 5-gallon buckets on the left, and they don’t pour well into 2″ bung holes without a funnel. The gas can sits very well in the filler bung, so I won’t spill… too much. The top barrel has a bung in the side, through which I run a 3/4″ ID gasoline hose; it goes through a stop-valve, to a 15-30 micron filter, through another stop-valve, through a 5-15 micron filter, and through the final stop-valve to the bottom barrel. The bottom barrel is equipped with a nice 10-gallon-per-minute hand pump. The whole setup is airtight, and is kept from freeze damage (thankfully only a slight possibility here in the Pacific Northwest) with one of those pipe-warming cords that winds its way from the top to the bottom. You will note the bottom barrel has a blanket around it for extra insulation. If we get a significant cold snap I’ll have to supplement the heat with a light bulb or something.

This whole section of the barn was built by the previous owner specifically to store Diesel fuel. The shelf there had four large fuel storage tanks on it when we first viewed the house. The guy owned a logging company and the barn was his workshop for the trucks and equipment. The floor below it is not on the concrete slab, but it is filled with gravel and oil-absorbent stuff. Pretty cool.

I only have about 5 gallons in the upper tank right now. It should start flowing on its own via gravity once I have about 20 gallons in the upper tank. Running the hand pump will also provide suction back through the system to boost the filtering. Can’t wait to get it running at capacity.