Are Data Tsunamis Inevitable Or Man Made Disasters?

What happens when people with no real knowledge of, or context for, how to handle data, infrastructure, or applications insist on being in charge and need to be seen taking strong, decisive action without ever being held responsible? It leads to really bad, often silly decisions with a raft of unintended consequences. Storage vendors love this: more iron to sell. And yes, all of it is predictable. When I'm able and allowed to poke around in storage systems and the data stored on them, I often come to the same conclusion: a bulk of the data is stored in an economically unsound fashion. Storage and software vendors love this too, as there are now data lifecycle management tools and appliances to be sold.

And the backlash of all this? Cost cutting, which means the data that has a valid need to be stored and protected doesn't get the resources it should. Why? Well, who's going to take the responsibility of pushing the delete button on the rest? As we get ever better technology to store, transport, and protect data, we manage to do more with less money and fewer people. But as is often the case, no good deed goes unpunished. Way too often those savings and efficiencies flow straight into the bottomless pit created by that age-old "horror vacui" principle at work in the world of data storage.

You get situations like this: "Can I have 60TB of storage? It's okay, I discussed this with your colleague last year; he said you'd have 60TB available in this time frame."

What is the use case? How do you need it? What applications or services will consume this storage? Does it really need to be on a SAN, or can we put it on cost-effective Windows Server Storage Spaces with ReFS? What are the economics around this data? Is it worth doing? What project is this assigned to? Who's the PM? Where is the functional analysis? Will this work? Has there been a POC? Was that POC sound? Was there a pilot? What's the RTO? The RPO? Does it need to be replicated off site? What IOPS are required? How will it be accessed? What security is needed? Is encryption required? Do any laws affect the above? All you get is a lot of vacant, blank stares and lots of "just get it done". How can it be that, with so many analysts and managers of all sorts running from meeting to meeting to get companies humming like well-oiled machines, these questions still end up on the desk of an operational systems administrator? Basically: what are you asking for, why are you asking for it, and did you think it through?

Consider the following: what if you asked us for 30 billion gallons of water, and we said "sure" and just sent it to you? We did what you asked. Perhaps you meant bottled drinking water, but a flood is what you'll end up with. And yes, it is completely up to specification, limited as that specification was.

The last words heard while drowning will be "Who ordered this?" You can bet no one will be responsible, especially not when the bill arrives and the resulting mess needs to be cleaned up. Data in the cloud will not solve this. Like the hosting business, which serves up massive amounts of idle servers, the cloud will host massive amounts of idle data; in both cases it's providing the service that generates revenue, not your actual use of that service or its economic value to you.

One thought on "Are Data Tsunamis Inevitable Or Man Made Disasters?"

  1. Well said. I seem to run into this problem very often in my company, and it astounds me that so many people don’t understand the idea of specifications and scope.

    This behavior is followed by clarifying questions to determine the scope, and then IT gets blamed for being a roadblock to critical operations.
