In a recent conversation with a colleague in the image archive services industry, he posed an interesting proposition that is extremely relevant to today’s growing interest in Enterprise Image Storage. The issue: how does one estimate and acquire image storage capacity, and how do vendors price storage technology? The relevance of a whale? There’s an old expression about swallowing the whale a bite at a time versus swallowing the whale whole. When applied to image storage capacity, does one acquire the whale all at once, or a “byte” at a time?
The imaging arena is a rapidly changing environment. Technologies such as CT and MRI have seen rapid changes in terms of new scanning technology and new scanning protocols that result in substantially larger data sets compared to a few years ago. Digital Mammography is also quickly ushering in larger data sets as the technology’s efficacy has been substantiated. On the other hand, regulatory policies may have the opposite effect as reimbursement levels are impacted. The multi-slice CT that was going to increase procedure volumes may be underutilized due to reimbursement issues. The dilemma: how to accurately estimate storage requirements in an ever-changing world?
The Impact on Image Storage Requirements
The challenge is how to size storage requirements and plan for them. Classically, vendors and customers alike have attempted to size requirements on the basis of procedure volume. If one knows how many CT procedures are done annually, the average number of images per procedure, and that the typical CT image is 0.5 megabytes in size (512x512 pixels times 2 bytes per pixel), one can easily calculate the number of terabytes of storage required for CT.
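To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. The procedure volume and images-per-study figures are hypothetical placeholders, not benchmarks; only the 512x512, 2-bytes-per-pixel image size comes from the paragraph above.

```python
# Back-of-the-envelope CT storage estimate (illustrative figures only).
BYTES_PER_PIXEL = 2                         # 16-bit grayscale
IMAGE_BYTES = 512 * 512 * BYTES_PER_PIXEL   # ~0.5 MB per CT image

annual_ct_procedures = 30_000               # hypothetical annual volume
images_per_procedure = 300                  # hypothetical average images per study

annual_bytes = annual_ct_procedures * images_per_procedure * IMAGE_BYTES
annual_terabytes = annual_bytes / 1_000_000_000_000   # decimal terabytes

print(f"Estimated CT storage per year: {annual_terabytes:.1f} TB")
# With these assumed numbers: 30,000 x 300 x ~0.52 MB ≈ 4.7 TB per year
```

Of course, the whole argument of this piece is that the inputs to such a calculation are exactly what is hard to pin down.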
Unfortunately, estimating study size can be daunting, as there is no such thing as an “average” study size. Another approach might be to take historical storage capacity and scale it in proportion to anticipated changes in procedure volume.
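That scaling approach amounts to a simple ratio, sketched below with hypothetical figures purely for illustration.

```python
# Scale historical capacity by the anticipated change in procedure volume
# (all figures hypothetical).
historical_capacity_tb = 10.0      # storage actually consumed last year
historical_procedures = 20_000     # procedures performed last year
projected_procedures = 24_000      # anticipated volume next year

projected_capacity_tb = historical_capacity_tb * (
    projected_procedures / historical_procedures
)
print(f"Projected capacity: {projected_capacity_tb:.1f} TB")   # 12.0 TB
```

The weakness, naturally, is that it assumes the storage consumed per procedure stays constant, which the changes in CT, MRI, and mammography described above call into question.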
Several vendors, in developing licensing or storage fees, have suggested an alternative approach: assume that facilities with similar characteristics are consistent in their usage patterns for the technology. Therefore, if facility A with 20,000 patient encounters annually required 10 TB of storage, it is reasonable to assume that facility B with a similar quantity and type of patient encounters will also require 10 TB.
In all cases, my suspicion is that there is not yet enough accumulated evidence to accurately predict storage requirements.
Planning Storage Requirements
So what is one to do in terms of planning for storage requirements? How does one go about estimating and planning capital requirements to ensure demand is met? Back to the whale – why not just swallow it all at once? The bet is that the added cost of acquiring extra capacity today is offset by the cost of having to acquire additional capacity in the future as a budget contingency.
Another approach is a balanced strategy: planned acquisition plus a plan for contingency capacity, such as outsourcing the unplanned portion. Alternatively, if outsourcing is an attractive option, one could negotiate a flat rate up to the expected limit, with contingency rates for overcapacity. While this strategy addresses the customer’s requirement, it shifts the capacity planning and capital requirement to the outsourcer. Of course, the advantage the outsourcer has is that it is planning for multiple customers and can spread the capital costs accordingly.
Are there industry models that can be followed? For example, document imaging has a head start in terms of deployment. What has been the experience with document imaging requirements? Are there capacity and pricing models that could be applied to medical imaging? And, while there may be models emerging for today’s imaging requirements, what about future requirements for emerging “ologies”? How does one go about estimating pathology storage requirements? Sleep lab? And, on and on.
An Opportunity
Perhaps there are lessons to be learned from other “utilities” that could be applied to Enterprise Image Storage planning. For example, electrical and gas utilities have significant data upon which to predict usage patterns for a family of four in a two-story house of a certain square footage in a particular climate. Why couldn’t one similarly rely on historical data from existing sites to predict requirements for future ones?
This would seem to represent an opportunity for entities that have been doing image storage for some time. Might this be a service they could offer prospective clients? Might it also represent an opportunity to develop statistical models for predicting storage requirements? Imagine the value to the market of being able to predict capacity requirements when contemplating, say, storage of pathology data. Is this an opportunity for one of the standards organizations to undertake? Given the image standards effort of the ACR and NEMA toward DICOM (Digital Imaging and Communications in Medicine), might this be something they could tackle? Or how about HIMSS?
I would be interested in both imaging facility and vendor perspectives. Is this something facilities would pay for? Would it take considerable effort to mine the historical data? Does it represent a competitive advantage if one can predict better and demonstrate true savings? Your thoughts?