Cell Phones as a Driving Distraction

I hate to talk on my cell phone while I’m driving. In fact, I hate talking to people on their cell phones when they are driving. Driving a car requires a significant amount of attention. You cannot pay attention to driving while also talking on a telephone. Many people will argue that they can, only because they talk on their phones all the time while driving and have yet to die in a fiery wreck. I would argue the opposite. On my commute I swear 3 out of 5 drivers have a phone plastered to their head. I have swerved to avoid many an inattentive cell-phone yakker drifting over the lane lines on the freeway. It was only because I wasn’t distracted, and was aware that they were, that I avoided a collision.

I haven’t been able to adequately argue why you shouldn’t talk on a phone while driving. But somebody else just did it for me.

What follows is probably the most well-considered argument I have read for why talking on a cell phone is so much more taxing on your brain than the other distractions one encounters while driving. It was written by my friend Adam Engst of TidBITS (also a digital.forest client!) as part of an ongoing discussion on a mailing list.

The first bits are quoted remarks from the previous discussion; the rest is all Adam:


(all the reported studies say that the distraction from the process of talking on the phone is as dangerous as the distraction from dialing the phone and holding it).

Are these distractions any more than having a passenger in the car and talking to them? How about the distraction of talk-radio?

I find myself in agreement with the studies that talking on cell
phones while driving is highly distracting, and significantly more so
than talking to another person in the car or listening to talk radio.
I base this somewhat on personal anecdotal experience, but largely on
what I learned while ghost-writing the late Cary Lu’s “The Race for
Bandwidth” book.

The problem is basically that a cell phone conversation is a very low
bandwidth communication channel, with significantly less bandwidth
available than for POTS (plain old telephone system) calls. That’s
why calls break up, voices are hard to understand, and so on. And
even when the voice on the other end is clear and continuous, the
audio range is significantly limited.

Now, whenever you’re faced with a difficult-to-interpret audio
signal, your brain responds by doing a great deal more processing. If
someone you’re speaking with isn’t speaking clearly, for instance,
you’ll look more intently at their face, in essence adding visual lip
reading to what you’re hearing; your brain combines the information
so you can better understand what you’re hearing. With cell phone
conversations, it’s common to see people plugging the ear not being
used for the phone to block out distracting external noises; in
essence, they’re subconsciously trying to devote more brain power to
decoding the cell conversation. I’ve even found myself closing my
eyes when trying to make out particular words that are difficult to
distinguish.

As a result, it simply makes sense that if your brain is being forced
to do a great deal of audio processing, it will have somewhat less
attention for driving. I’m sure people can learn the skill of driving
while talking on the phone – repetition will improve nearly any
activity – but I have no doubt that talking on a cell phone is a
notable distraction for many.

What about the situation where you’re talking with someone else in
the car? There are two huge differences. First, the amount of
bandwidth is huge – the audio quality of someone sitting next to you
is many times that of a telephone call. Second, and more important,
if the person in question is an adult, they can (and usually will)
adjust their speaking to the driving conditions. An aware companion
will stop talking if the driver needs to navigate an unfamiliar area,
or if there’s a traffic hazard approaching. Driving with an unaware
companion, such as a screaming baby, would thus be much worse.

How about the radio? Again, the bandwidth is generally higher, and
the audio quality generally improved by being sent through car
speakers. But what’s key with radio is that it’s a one-way
transmission. You must still process the incoming audio, but there’s
no need or expectation that you’ll reply, and the informational value
of the content is generally low. In other words, you can tune out the
radio to concentrate on driving for seconds or minutes with no
downside. And of course, you can always shut it off – you’re in
complete control of the one-sided conversation without even the need
for social niceties (it’s rude to just hang up on someone, but no
radio host is bothered if they’re turned off :-)).

So again, with the acknowledgement that anyone can practice talking
on the phone while driving to improve their driving-while-talking
skills, it seems quite clear to me that it does detract from
attention paid to the road, and more so than either a companion in
the car or listening to the radio. Improving the physical situation
by using a headset and voice dialing rather than holding and dialing
the phone will also help, but only so far.

cheers… -Adam


Well said, Adam!
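To put some very rough numbers behind the bandwidth argument Adam makes, here is a back-of-the-envelope comparison. The specific figures below are my own ballpark values for a typical landline codec and a typical 2G cell codec, not anything from his note:

```python
# Rough comparison of the audio channels discussed above.
# All figures are nominal ballpark values, not measurements.

channels = {
    # name: (audio passband in Hz, approximate digital bit rate in bits/sec)
    "in-person speech": ((80, 14000), None),     # no codec in the loop
    "POTS landline":    ((300, 3400), 64_000),   # G.711 PCM voice channel
    "2G cell (GSM FR)": ((300, 3400), 13_000),   # full-rate speech codec
}

for name, ((lo, hi), bitrate) in channels.items():
    width_khz = (hi - lo) / 1000
    rate = f"{bitrate / 1000:.0f} kbps" if bitrate else "n/a (analog, direct)"
    print(f"{name:18s} passband ~{width_khz:.1f} kHz, bit rate {rate}")
```

The exact codecs vary by carrier and era, but the ordering is the point: the cell phone is the narrowest, lossiest pipe of the three, so your brain spends the most effort filling in the gaps.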

The old Raised vs. Solid debate.

Note to my usual readers: This is something I wrote a few days ago for a technology blog I occasionally write for. They haven’t posted it yet (I think the staff are out on holiday… unlike those of us who never stop working!) so I thought I’d post it here. It is deliberately lighthearted, as I was trying to contrast the overly dry style of the white paper I was commenting on. Let me know what you think, and whether it could use some edits.


I just slogged my way through Douglas Alger’s 5-page excerpt from a Cisco Press white paper purportedly discussing the merits of raised floor versus non-raised floor designs for datacenters. It spends four paragraphs of the first page telling you why overhead distribution on a solid floor is not good, then rambles on for the next 4.5 pages telling you all about raised floors. From that alone, and from several statements sprinkled throughout the paper, it appears the author has a strong preference for raised floor. Some of his statements about overhead infrastructure are just plain wrong, or describe problems that are easily mitigated. Perhaps he’s never even managed a solid floor facility? So much for a thorough analysis!

Given that I am involved in the management of two facilities, both designed at the same time, but one using raised floor and the other a solid floor with overhead infrastructure, I feel like I can present a more balanced viewpoint. I agree with most of what Mr. Alger says about raised floors, both their strengths and weaknesses. He neglects a few glaring issues with raised floors, and highlights a few of their annoyances quite well, such as tile/cabinet drift. What Alger fails to do is explore the benefits of a solid floor datacenter; therefore let me lay those out for you:

Floor Load
Alger is living in the past when he talks about “heavy” racks weighing 1,500 lbs. In today’s high-density reality, 1,500 lbs is a lightweight installation. The average installation we are seeing in our facilities today is 1,800 lbs, and we have several cabinets that exceed 3,000 lbs! I don’t see this trend changing any time soon. When people have 42RU to use, or to put it more bluntly, 42RU that they are paying for, they are going to stuff it with as much as they can. This is where a solid floor really shines over a raised one. Got a big, heavy load? Roll it on in and set it down wherever you please. No ramps to negotiate, no risk of tiles collapsing and your (very expensive) equipment falling down into a hole.
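To put rough numbers on the difference (the cabinet footprint and the tile rating below are my own illustrative assumptions, not figures from Alger’s paper):

```python
# Back-of-the-envelope floor loading for one heavy cabinet.
# All figures are illustrative assumptions.

cabinet_weight_lbs = 3000      # one of our heavier installs
footprint_sqft = 2.0 * 3.5     # assumed ~24" x 42" cabinet footprint
load_psf = cabinet_weight_lbs / footprint_sqft

assumed_tile_rating_psf = 250  # assumed uniform-load rating for a generic raised-floor tile

print(f"Loaded cabinet:      {load_psf:.0f} lbs/sq ft")
print(f"Assumed tile rating: {assumed_tile_rating_psf} lbs/sq ft")
# A concrete slab shrugs at either number; the tile (and the ramp on move-in day) may not.
```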

Stability
Steel-reinforced concrete slabs don’t rattle, shake, shift, or break… at least under normal circumstances. If your datacenter is located in a geographic region known for what I like to call “geological entertainment,” your datacenter is likely better off with a solid floor. You can anchor all your infrastructure to a concrete slab far better than to a raised floor. The stress, shaking, and shuddering of a seismic event can displace floor tiles. The last place I want to be in an earthquake is in a raised floor datacenter… tiles popping, racks swaying, and the whole floor structure wobbling around underfoot does not make for a confidence-inspiring ride. I’ve been inside a solid-floor facility in a 7.1 earthquake; the overhead ladder rack and server racks all moved in unison, creating an eerie wave, but the floor remained solid throughout, much to my relief.

Calculations of point loads and rolling loads become irrelevant, except perhaps for your UPS gear if your facility is above the ground floor of the building.

Fire Suppression
Fire suppression technologies in today’s datacenter focus on isolating smaller zones and releasing a clean agent to extinguish a fire in that area. If you have a raised floor, you instantly double the number of zones you must monitor and deploy fire suppression into: the server spaces as well as the plenum spaces. Zone isolation is achieved through solid walls and dampers in the air handling system. These are trivial to build and secure in a solid floor facility, where air supply and return plenums and ductwork can have automatic dampers driven by the fire suppression system. In a raised floor environment of any scale, the same isolation is prohibitively expensive and in some cases just flat-out impossible. In the facilities I am involved with, the solid floor datacenter is protected by FM-200 and ECARO-25 fire suppression systems throughout its entirety, whereas the raised floor datacenter’s fire suppression is limited to the UPS rooms.

Datacenter fires are unlikely, but the presence of suppression systems is a requirement for some users of datacenter facilities. If datacenters are kept clean and dust-free, and combustible materials are kept out (almost impossible, as the presence of servers is a guarantee of cardboard proliferation!), then the risk of fire is low, but it cannot be completely eliminated. The under-floor plenum spaces are a magnet for dirt, dust, loose change, and various bits of paper, cardboard, and other debris. I’ve never seen a raised floor plenum that wasn’t dirty after a year or so in service. How many of you have seen fire suppression extended to the plenum space under the floor? What good is it to deploy it in one part of the datacenter and not another?

Cleanliness
The above point leads directly to this one. Datacenters should be very clean environments. Solid floor facilities are much easier to maintain to a very high standard of cleanliness; raised floors are not. Periodic removal of all the tiles is required to clean the plenum spaces. Not only is this a messy hassle, it reduces the effectiveness of the cooling system during the maintenance window and exposes your cabling infrastructure to risk of damage. My car always needs washing, and my wife will tell you I’m a slob, BUT my datacenters are clean enough to eat off of… not that you should even THINK of bringing food or drink into one of them! I can stand in my solid floor facility and visually scan for dirt and dust with the efficiency of The Terminator. Not so with a raised floor. Unless it was installed yesterday, all manner of dirt, dust, and debris lurks beneath every raised floor in actual production. Raised floor advocates will try to deny this, but no raised floor will pass the repeated scrutiny of a white-glove test.

Raised floors also provide a false sense of order. Whether a single cable is out of place or a shameful rat’s nest lies beneath, it is hidden; to the casual observer there is no difference. The CEO who tours through once a year may not know whether it is the one stray cable or the rat’s nest, but YOU will… and YOU are the one who has to manage it. Every production facility is in a constant state of change, and if things go unchecked for even a little while, what started as a well-ordered cable plant can turn into a rat’s nest pretty fast. Tracing cables under floor tiles is one of the biggest pains in the posterior any datacenter manager has to deal with. I have found that with all the infrastructure in plain sight, keeping it in order is at least easier. There are no surprises lurking when everything is out in the open.

Density and Growth
The reality of high-density computing is that the datacenter must be able to support far more cable, power, and servers per rack than ever before. The days of eight 4U servers, a patch panel, and maybe a few bits of 1U network hardware in a rack are long gone. Today’s racks each need hundreds of cat-5 ports for multiple NICs and various storage connections, room for forty-plus 1U servers or maybe even a half-dozen blade chassis, and enough power to drive a Tesla Roadster from San Francisco to Seattle. If your raised floor was built even as recently as five years ago, there likely just isn’t enough space in your plenum to handle that much cable, at least not without seriously compromising your airflow. Once you build your raised floor, you are locked into that design. You must peer far into the future and assume infrastructure needs way beyond what is expected today. With a solid floor and overhead infrastructure, you can keep adding network and power without compromising cooling or airflow.

Those two facilities I mentioned, one of each type? The raised floor facility has hit the limit of what it can power and cool, based on a seven-year-old design, yet it still has empty spaces that will remain unused, forever. The solid floor facility is currently being expanded while remaining online and operational. It will soon be capable of more than double the watts-per-square-foot its original designers planned for in the year 2000, and it will be able to pack every rack full to 42U. The cooling system, which originally consisted of giant air diffusers up in a 15′ ceiling, is being modified with ductwork to concentrate cold air right in front of each rack, with hot-air return plenums routed out of the hot aisles and back into the HVAC system on the roof. The ladder-rack cable trays are not even at 20% of their capacity. This scenario is not possible with a raised floor datacenter, unless you can shut it down for a complete overhaul.
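A quick sketch of why the density math runs away from a circa-2000 design so quickly (the per-server draw, floor area per rack, and old design target are all my own illustrative assumptions):

```python
# Rough rack power-density arithmetic; all inputs are illustrative assumptions.

servers_per_rack = 40     # forty-plus 1U boxes in a 42U rack
watts_per_server = 300    # assumed average draw for a dual-CPU 1U server
rack_watts = servers_per_rack * watts_per_server

sqft_per_rack = 20        # assumed rack footprint plus its share of aisle space
watts_per_sqft = rack_watts / sqft_per_rack

old_design_target = 50    # assumed W/sq ft target for a circa-2000 floor design

print(f"Per rack:        {rack_watts / 1000:.1f} kW")
print(f"Per square foot: {watts_per_sqft:.0f} W/sq ft")
print(f"Overload factor: {watts_per_sqft / old_design_target:.0f}x the assumed circa-2000 design")
```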

Access
Contrary to Mr. Alger’s claim, every solid floor datacenter I have worked in has had power and network terminations within reach of an average-sized human being, no stepladders required. In the current solid floor facility I manage, the ladder rack is substantial enough, and the ceiling high enough, that workers can walk on the structure itself. A ladder is only needed to ascend to it; once up, you can walk around the entire facility quite safely, nine feet off the floor. And the only time anyone needs to go up there is to install new cabling or access the HVAC ductwork, which is rare. Working beneath floor tiles, by comparison, is a miserable chore.

Having worked in both environments over the years, I’m leaning towards avoiding raised floor in the future and sticking with solid floor facilities. To me, raised floor stands as an echo of older days, when “The Datacenter” contained a handful of mainframes, a minicomputer or two, and men in white shirts and pocket protectors loading tapes and sitting at terminals. An entirely raised floor design just does not scale to the density needs of a modern facility. I have seen hybrid facilities with a raised floor plenum used solely for cooling and overhead ladder rack for power and network delivery, and that seems like a good compromise to me. But the overall benefits of a solid floor have convinced me never to look back at raised floor except with nostalgia. I suspect I am in the minority, though, as so few people have had the opportunity to experience both options first-hand. Inertia has led people to think of datacenters only in the context of raised floors.

Do you agree? Or do you think I’m wrong? Let me know in the comments.

Murphy’s Law

wtf

So yesterday two new clients moved into our facility. I always do my best to make sure that these initial experiences are positive ones. One of the clients was a half-rack install; they were just doing preliminary installation of some network gear prior to the eventual move of their servers. That one went pretty well.

The other one was a bit more complicated, but certainly within the realm of “simple.” They wanted a full rack, with an adjacent empty rack for future growth. They also needed a 120V/30A electrical circuit. These are becoming fairly standard in full-rack installs due to the electrical requirements of 1U servers. “Standard” electrical for a rack is two 120V/20A circuits. This was fine in the old days but it won’t cut it with 42U of multi-CPU boxen.
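For those who haven’t done the math, here is the back-of-the-envelope version (the per-server draw is an assumption of mine for illustration, and I’m using the usual 80% continuous-load derating):

```python
# Why two 120V/20A circuits don't cut it for a rack full of 1U servers.
# Per-server draw is an illustrative assumption.

def usable_watts(volts, amps, derating=0.8):
    """Plan around ~80% of breaker rating for continuous loads."""
    return volts * amps * derating

standard_pair = 2 * usable_watts(120, 20)   # the "standard" two 20A circuits
one_thirty_amp = usable_watts(120, 30)      # one 30A twist-lock circuit

servers = 42
watts_each = 250                            # assumed average draw per 1U server
demand = servers * watts_each

print(f"Two 20A circuits:   {standard_pair:.0f} W usable")
print(f"One 30A twist-lock: {one_thirty_amp:.0f} W usable")
print(f"42 x 1U servers:    {demand} W of demand")
# A rack packed with 1U boxes blows well past the standard pair,
# which is why dense installs end up with 30A circuits -- often more than one.
```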

We have several racks and cabinets with 30A “twist-lock” circuits, but only one has an adjacent empty rack. In reality it isn’t empty; it has one server in it, but that server is an old, almost-retired d.f DNS server, so I figure this is an ideal location for the new client. It is also RIGHT underneath the hot air return of our primary HVAC system… the perfect place to put high-density, heat-creating clients. The HVAC runs much better when it has very hot air going into it.

I make sure that the rack is prepped and ready. Kevin, our facilities manager, makes sure the 30A circuit is there and working. I gather up all the tools the client will need: cage nuts, rack screws, screwdrivers, tie-wraps, velcro wraps, that damn little cage-nut tool that saves your fingers from certain damage, a bubble level, etc. All of this goes on a cart near their rack. I put the step ladder near their rack. I pre-wire their network connection (with switch-configuration help from Kyle, my awesome network manager). I even put a trash can next to their rack, charge the battery for the cordless drill, and fetch a cart for them to bring their servers up the elevator. All that’s left is for me to wait for them. What could go wrong?

Plenty, it turns out. Of course, they are doing a late-night Friday cut-n-run from their previous provider, so Murphy rears his ugly head.

First, they email me around 4:30 PM with a list of what they are bringing. They had requested a 30A twist-lock circuit, but I note the lack of a 30A twist-lock power strip. Usually when a client requests a 30A circuit, they have the gear to handle that sort of circuit. These guys don’t. Of course, this is not the sort of item you can go pick up at Fry’s… or even Graybar. They usually require ordering and a two-week wait, and these guys will be here in a few hours. I tell their sales guys that they don’t have one and we scramble to find one. Kevin calls Graybar, on the off chance they have one… Nope. We call a few other electrical supply places. Nada. We do have a large client… our second largest, who has several racks they JUST purchased, one of which is still empty, that have twin 30A twist-lock power strips. I suggest to the sales team that they ask real nice. This wonderful client agrees to exchange one for a later order, so I pull it from their rack and prep it for install. Later, as I’m in our in-the-process-of-relocating build lab looking for the chargers for the cordless drill’s batteries, I discover… a 30A twist-lock power strip! Sigh. Typical Friday night install… grr.

The new client arrives around 11 PM. I greet them, show them where they’re going to install, hand them off to Nick on night shift, and start driving home. I didn’t mind waiting until late to drive home, as the Friday night commute for me going north is always miserable. They had originally stated an 8-9 PM arrival time, but were a few hours late. “Even less traffic,” I thought! Wrong. The I-5 express lanes were closed by the time I reached them, and three lanes were closed in Lynnwood for construction, so I waited in an interminable backup at the north end of town. Then, later, I’m flying along and pass a Lexus SUV, and my Valentine detector goes bonkers with a laser warning. This is the stretch of I-5 where the WSP heavily enforces the speed limit, and I’m getting lasered from behind while doing more than 25 MPH over the limit… I figure I’m completely screwed.

The detector just doesn’t shut up… it keeps going off in that REALLY loud “SLOW DOWN NOW YOU IDIOT” tone that Valentine has chosen for the laser warning. I’m dropping speed as fast as I can and keep looking in my rearview for the flashing lights… but instead the Lexus passes me and the alarm stops. After a bit I speed up again, and sure enough, passing the Lexus produces the screeching alarm again! They must have something on the front of the SUV that is triggering the laser alarm. I try to put distance between me and the annoying SUV, and of course THEY speed up too! I even try the “diesel smoke screen” trick, and that doesn’t help. I drive the last 8 miles of my I-5 route in north Snohomish County with my laser alarm going off every few seconds.

As I’m about to exit, my cell phone rings and it is the office. Nick Rycar tells me that the client’s servers won’t fit in the rack we’ve prepared. Whisky. Tango. Foxtrot. ?? !! A 19″ rack is a 19″ rack… this is an industry standard ferrchrissakes… how could they not fit? Nick tells me there is some sort of lip on the vertical posts of the rack and their Dell rails won’t fit on them. I’m thoroughly confused. We’ve had these racks for years and have never had an issue fitting a server into them… ever. He describes the servers as being “too wide” to fit in them.

First off, I hate talking on the phone while driving, so I am never able to think analytically while making forward progress in a motor vehicle. To me, driving is a Zen-like activity that consumes all my CPU cycles; it is one of the things I really enjoy about driving. So I’m trying to figure out what Rycar is talking about and it just doesn’t compute in my brain. I ask if they are using the cage nuts in a vertical orientation, because I know those allow for a certain amount of horizontal drift off the 19″ width. He doesn’t know what I’m talking about. I tell him that in about five minutes I’ll be at home and he can call me on my extension, as I’ll be out of cell range.

I arrive home and do the stealthy stumble through the dark house in an attempt to not wake anyone up, then wander off to the bedroom where my office is, because I know I’ll be getting calls on my IP phone there. Sure enough, I can’t log into it for some dumb reason. I pop open my laptop and thankfully Kyle is online (doing his scheduled maintenance on the BGP routers) and assists me with the IP phone-fu to get myself logged in. I call the datacenter and Nick assures me that they’ve swapped the vertical rails in the rack with some from another rack that look like they’ll work. (The photo above is from his cell phone, which he shared with Kyle while they were trying to figure out why the Dell rails would not fit in the posts.) Satisfied, I nod off to sleep.

Like driving, sleep is something I’m very good at. I can sleep through anything. I’m lying there… probably an hour after falling asleep, when my IP phone starts ringing. I have no idea how many times poor Nick called my extension and dropped to voicemail before waking me up… but the ringing finally pulled me back to semi-consciousness. I stumble towards my desk and pick up… just as it goes to voicemail… again. I call the datacenter extension. Rycar answers and says the other vertical posts ALSO don’t fit. He says they’ve given up on getting the server rails mounted and they’re going to try putting the servers on shelves in the racks. I’m OK with that and am likely asleep before I even get fully horizontal.

About an hour later, who knows… the phone wakes me up again. Nick has the client there, who wants to talk to me. I’m really not very coherent, but the guy says basically, “This isn’t working… we have three choices: find somewhere else to locate here, (I forgot the other… remember it was 3:30 AM and I had just been awoken), or move back to our old provider.” I don’t recall much else of what was said, but I do recall one phrase: “we are really disappointed in digital.forest.”

Ouch. That hurt. Here I’d made sure that everything was just right… and some oddball hardware fitment issue that nobody could have predicted had caught us all unaware. Like I said to Nick earlier on the phone while I was driving home: “A 19″ rack is a 19″ rack. This is an industry standard. It makes no sense that something would not fit.” Of course, now that I’m fully awake and typing this, I realize that they had Dell gear, and there was a period when Dell gear was sold with Dell-specific mounting hardware, designed specifically to fit in Dell-supplied, Dell-labelled, Dell-sold racks. Maybe that is what bit us in the ass? I don’t know, and won’t know until Monday when I can have a look at this stuff myself.

That stinging phrase wakes me up enough to have a moment of clarity, and I suggest that there IS another cabinet, one of our 23″ wide ones (that we have modified to be 19″), at the end of row 11 with a 30A electrical circuit (two, IIRC) available. The only reason I didn’t suggest it to start with is that there is not an empty cabinet adjacent to it. The client couldn’t care less about their future expansion needs at the moment; he just wants to get their stuff online right now. They agree to move there. I speak with Nick a bit and let him know to only put them on an electrical circuit off PDU#1, as PDU#2 is at capacity (PDU#4 will be installed soon!). He checks, and sure enough, both circuits are from PDU#1, so I hang up and assume the train should be back on the rails, if you’ll pardon the pun.

I’m awake now… it seems. It is almost 4 AM, so I flip on the TV, hoping to catch the start of Le Mans on Speed TV. Nope… they are running an infomercial about a drill bit sharpener… sigh. I flip it off, lie down, and fall asleep instantly.

Sue comes in around 10 AM and wakes me up to tell me she’s off to run some errand or whatnot. I drag my ass out of bed. I’m in that weird state of sleep-shift… almost like jet lag, disoriented from waking up so far off my usual schedule. I flip on the TV, still on Speed, hoping to catch some Le Mans coverage. They are interviewing some pit technician and showing a NOC-like setup of 20 flat screen monitors displaying car telemetry and whatnot. Then I discover this isn’t Le Mans at all… as they pan up to show the skyline of Indianapolis. It is qualifying for the USGP. WTF? There is an actual race going on; I don’t want to see qualifying! Besides, there aren’t even cars on the track at Indy, so they play commercials and teasers for more NASCAR coverage… (yawn) for fifteen minutes! …at the end of which they say Le Mans coverage will begin again at 2 PM PDT. Crap. I flip it off… literally and figuratively.

Murphy’s Law indeed.

Datacenter Density Illustrated

I was in a meeting with a potential client who will now be a new colocation client starting later this week. They came to us asking for three racks. We asked them about their equipment, and it totaled up to about 40U worth of gear. So logically, we asked them: “Why would you want to pay for two empty racks?”

They then related a story about asking their current provider for more power and being told that not only could they not have it, but that they’d have to remove servers from their racks. They had one rack’s worth of gear spread out over three racks. They were paying for full racks but only being allowed to use one-third of each rack due to power restrictions.

Needless to say, I was amazed.

I won’t name the other provider (though the initials of their facility here in Seattle are “F.P.”), but I do find it odd that a service provider would put a client in such an awkward position. If you pay for a full rack, you ought to be able to utilize every rack unit, right? That is just part of the digital.forest difference. You need it? We got it. We can do custom power. We can do high density. We can help you succeed.

Mind you, we are not giving this stuff away. We are not the least expensive option for Seattle-area colocation. BUT, if you examine our offerings closely, you will find the BEST VALUE FOR YOUR DOLLAR, precisely because we can accommodate almost every infrastructure need at a very reasonable price. We will not tell you what you CAN’T do; we help you accomplish what you need to do. Don’t let your colocation provider hold you back. Choose digital.forest.

cables & conduits

cables

conduit

I love looking up when in our datacenter and seeing all the well-ordered cable and conduit… for some reason it is very visually appealing. The camera cannot adequately capture it, because wide-angle lenses distort the straight lines and longer focal lengths capture only a small slice of the wonder.

Big Fiber and Electrical conduits all bent around like an exhaust manifold…
Hundreds of strands of UTP all bundled and laced…
Big DC power busses neatly arrayed…
Fiber-optic cables and innerducts going hither and yon…

Call me weird, but I could stare at this stuff all day long.