Thin provisioning is one of the most important features on modern storage arrays. Almost every storage vendor offers this capability, but if you don’t over commit your storage you are missing the boat. Let’s start off at a high level with what a traditional storage array looks like without thin provisioning. In the diagram below we have two things: provisioned space and free space.
Once you start thin provisioning you introduce the concept of consumed space. When you start caring about consumed space instead of just provisioned space, free space (as defined above) is more aptly called unallocated space, since your actual free space is simply unused storage. Below is the same array as above (8TB of provisioned storage), but here we made everything thin, and of the 8TB we assigned only 2TB is actually in use. This leaves us with 8TB free on the array.
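The bookkeeping above can be sketched in a few lines. This is a minimal illustration using the article's example numbers (a 10TB array, 8TB provisioned across thin volumes, 2TB consumed); the volume names and the split between volumes are made up for the example.

```python
# Thin-provisioning accounting sketch. Capacity and per-volume numbers
# are illustrative, matching the 10TB example array in the article.
ARRAY_CAPACITY_TB = 10

# Each thin volume tracks what was promised vs. what is actually written.
volumes = {
    "vol1": {"provisioned": 4, "consumed": 1},  # hypothetical volume
    "vol2": {"provisioned": 4, "consumed": 1},  # hypothetical volume
}

provisioned = sum(v["provisioned"] for v in volumes.values())  # 8 TB promised
consumed = sum(v["consumed"] for v in volumes.values())        # 2 TB written

unallocated = ARRAY_CAPACITY_TB - provisioned  # space never handed out
free = ARRAY_CAPACITY_TB - consumed            # space actually still usable

print(f"provisioned={provisioned}TB consumed={consumed}TB "
      f"unallocated={unallocated}TB free={free}TB")
```

The key distinction is the last two lines: a thick array only sees the 2TB of unallocated space, while a thin array sees the full 8TB of free space.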
It is pretty easy to see why thin provisioning is so great. You could now use that extra 6TB of provisioned-but-unused space for something like storing more of your array-based snapshots. Of course we want to maximize the potential of our array, and thin provisioning is the gateway drug to storage over commitment. By over committing your storage you can provision more space than you actually have installed on your array. In the diagram below we take the same array with 10TB of total usable storage and configure our attached systems to use a combined 14TB.
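The over commitment in this example is easy to quantify: 14TB promised against 10TB of usable capacity. A quick sketch of the ratio (the 1.4x figure is just arithmetic on the article's numbers, not a recommended target):

```python
# Over-commitment ratio for the example array:
# 14 TB provisioned against 10 TB of usable capacity.
usable_tb = 10
provisioned_tb = 14

ratio = provisioned_tb / usable_tb
print(f"over-commitment ratio: {ratio:.1f}x")  # 1.4x
```

Anything above 1.0x means you are over committed and betting that consumed space stays below what the array physically holds.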
Everything is now looking great. You have added more storage to your systems than you actually had available. Your company’s CFO might give you a hug when he sees all the money you saved the business! Before you collect on that warm embrace, be prepared for the next step: users start putting the storage to work, and you start running out of free space.
Now you are in the danger zone. You only have 2TB of free space, but you still have 6TB of provisioned, unwritten space promised to your systems. This is where efficiency features on your storage array can come to the rescue. Not all arrays are created equal, so you may or may not have these features, but deduplication and compression are two of the most common. When you use these features, think of them like a trash compactor putting downward pressure on your consumed storage. After all, the data you are compressing and deduplicating is trash anyway (duplicate and redundant data blocks that do not need to be stored).
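To make the deduplication idea concrete, here is a toy sketch of block-level dedup: blocks are fingerprinted by hash, and only unique fingerprints need backing storage. The block contents are invented for the example, and real arrays do this inline at much finer granularity.

```python
# Toy block-level deduplication: duplicate blocks share one stored copy.
import hashlib

# Hypothetical logical blocks written by attached systems; three of the
# five are identical (think: the same OS image cloned to several VMs).
blocks = [b"os-image", b"os-image", b"database", b"os-image", b"logs"]

# Fingerprint each block; identical content hashes to the same digest.
unique = {hashlib.sha256(b).hexdigest() for b in blocks}

print(f"logical blocks: {len(blocks)}, stored blocks: {len(unique)}")
```

Five logical blocks compact down to three stored blocks, which is exactly the downward pressure on consumed space the trash-compactor analogy describes.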
This is going to seem contrary, but over committing your storage makes the most sense when you have a lot of storage (or at least a decent amount of excess). When you decide to over commit, you are working off the principle of shared risk: the more storage you have, the less risk there is in over commitment. You certainly don’t have to have a lot of storage to over commit, and using storage efficiency features can give you the buffer you need to feel comfortable.
Once you go down the over commitment path you need to manage it. Monitor your consumed space versus provisioned space, set up automated alerts, and audit them regularly to make certain they work as expected. Also make certain to monitor your performance: when you start stacking up systems on your storage, even though you may now have the space, you still only have a certain number of IOPS.
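A monitoring check along these lines can be sketched as below. The 80% alert threshold is an illustrative choice, not a vendor standard, and the function name is hypothetical; in practice you would feed it figures pulled from your array's management API.

```python
# Sketch of a consumed-vs-provisioned capacity check with a simple
# alert threshold. The 80% threshold is an illustrative assumption.
ARRAY_CAPACITY_TB = 10
ALERT_THRESHOLD = 0.80

def check_capacity(consumed_tb, provisioned_tb):
    """Return a list of alert messages for the current capacity state."""
    alerts = []
    used = consumed_tb / ARRAY_CAPACITY_TB
    if used >= ALERT_THRESHOLD:
        alerts.append(f"consumed {used:.0%} of array capacity")
    if provisioned_tb > ARRAY_CAPACITY_TB:
        alerts.append(f"over-committed: {provisioned_tb}TB provisioned "
                      f"on {ARRAY_CAPACITY_TB}TB usable")
    return alerts

# The danger-zone state from the article: 8TB consumed, 14TB provisioned.
for msg in check_capacity(consumed_tb=8, provisioned_tb=14):
    print("ALERT:", msg)
```

Run this on a schedule and the danger zone announces itself before you hit the wall, rather than after.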
Storage Over Commitment Guide
Thin provision your volumes and LUNs
Over provision your storage to drive up storage array utilization
Make sure you have a good amount of buffer storage before you over commit
Proactively monitor your space consumed vs space provisioned
Use storage efficiency features to reclaim consumed space
Make sure you don’t over commit your performance