Data inflation

You know how some countries just keep printing money? They watch their money lose value, more and more, over time. You end up with insane paper bills of 10 million somewhollars or 2.5 gazillion whateverollars that still only buy you a cup of coffee and a morning paper. (It would be pretty meta if you ended up with a paper notepad costing more than its equivalent amount of paper money. How would that work?)
This is called hyperinflation. Countries that go through such a period need a reset now and again. Germany, for instance, experienced hyperinflation right after WW1 and saw the biggest devaluation of the German Mark in 1923.

Just one year earlier, Germany’s “biggest” bank note was 500 Marks. In 1923 the biggest bank note was one hundred trillion Marks: that’s a 1 with 14 zeros! In the Germany of 1923, you’d better spend your money today, because the price would be twice as high the day after tomorrow!

Something similar is going on with digital information. Albeit at a different pace, the amount of digital information expands impressively fast. We had around 160 billion gigabytes of digital information to store in 2006, about 800 billion in 2008, about 1,200 billion in 2010, 2,700 billion in 2012 and 4,000 billion in 2013.
And we’re talking gigabytes here, not bytes, kilobytes or megabytes. And billions of them, not thousands or even millions.
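To put a number on that curve: taking the round figures above at face value, a quick back-of-the-envelope sketch (in Python, nothing official about it) gives the implied yearly growth rate:

```python
# Rough estimates of the digital universe, in billions of gigabytes
# (i.e. exabytes), from the figures quoted above.
estimates = {2006: 160, 2008: 800, 2010: 1200, 2012: 2700, 2013: 4000}

first_year, last_year = min(estimates), max(estimates)
ratio = estimates[last_year] / estimates[first_year]
growth = ratio ** (1 / (last_year - first_year))  # compound annual growth

print(f"Implied compound growth {first_year}-{last_year}: {growth - 1:.0%} per year")
# -> roughly 58% per year, i.e. the pile doubles about every year and a half
```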
I hear you say: “Yeah, but we have Zetta!” (For the not-so-nerdy among you: a zettabyte is a thousand billion gigabytes.)

I acknowledge that. It’s just that there are two obvious problems with Zetta:
1. It sounds like crap! It sounds like something a mad villain with a bad German accent would say in a ’60s Bond movie: “Iz zetta byte?”
2. What comes after zetta? Because this is just the freaking beginning… Anybody who thinks we’re about to hit a plateau in the amount of data we create on a daily basis is either living on a different planet or hasn’t been outside for a while. (Actually, “yottabyte” comes after zettabyte. That doesn’t sound much better, and there’s no successor to yotta yet; see the sketch after this list.)
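For reference, here’s the official SI prefix ladder for bytes, as a minimal sketch. As of this writing, yotta really is where it stops:

```python
# The official SI prefix ladder for bytes: name and power of ten.
si_prefixes = [
    ("kilo", 3), ("mega", 6), ("giga", 9), ("tera", 12),
    ("peta", 15), ("exa", 18), ("zetta", 21), ("yotta", 24),
]

for name, exponent in si_prefixes:
    print(f"1 {name}byte = 10^{exponent} bytes")

# 1 zettabyte = 10^21 bytes = 1,000 billion gigabytes.
# After yotta (10^24) the official ladder simply ends.
```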

So I think it’s time for a whole new naming strategy for storage capacity. And because there’s nothing really stopping us from generating more and more digital stuff to store, the naming should take inflation into account.
To clarify, on the most basic, abstract level the size of a storage thingy should be indicated with “fit” or “no fit”. You should buy 1 fit, or a couple of no-fits. (That should fit.)
And next year, the new “fit” you can buy is bigger, because we’ll be creating more data by then.
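To make that concrete, here’s a toy sketch of an inflating unit. The 2015 peg and the yearly growth rate below are invented for illustration; only the mechanism matters:

```python
# A toy "fit": a storage unit whose size in bytes is re-pegged every year
# to the growth of the digital universe. Both constants are made up.
BASE_YEAR = 2015
BASE_FIT_BYTES = 2 * 10**12  # assumption: 1 fit == 2 TB in 2015
YEARLY_GROWTH = 1.5          # assumption: data grows ~50% per year

def fit_in_bytes(year: int) -> float:
    """Size of 1 fit in a given year: the peg, inflated year over year."""
    return BASE_FIT_BYTES * YEARLY_GROWTH ** (year - BASE_YEAR)

print(f"1 fit in 2015: {fit_in_bytes(2015):.2e} bytes")
print(f"1 fit in 2018: {fit_in_bytes(2018):.2e} bytes")  # bigger, as promised
```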
I think the measurement that comes closest to this idea is a light-year: the distance a ray of light travels in a year.

Something similar should be available for measuring digital information:
The capacity needed to store the amount of X generated by Y in Z amount of time
Although I really believe the way we indicate the size of digital information should have this kind of logic in it, I don’t know exactly how it should be defined. Other people should do this. I flunked math.
But it could, for instance, have something to do with the average number of hours of video uploaded to YouTube every minute last year. E.g.: I’m going to get a hard drive for my videos and it needs to be half a YouT, meaning I can store about 150 hours of video if I buy it in 2015 (https://www.youtube.com/yt/press/statistics.html).
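Here’s a rough sketch of what a YouT could look like in practice. The upload rate comes from the YouTube statistics page linked above (about 300 hours per minute in 2015); the gigabytes-per-hour figure is purely my assumption:

```python
# A sketch of the "YouT": the hours of video uploaded to YouTube per
# minute, averaged over a given year. Only the 2015 figure is sourced.
UPLOAD_HOURS_PER_MINUTE = {2015: 300}
ASSUMED_GB_PER_HOUR = 1.5  # assumption: ~1.5 GB per hour of video

def yout_hours(year: int) -> float:
    """1 YouT in a given year, expressed in hours of video."""
    return UPLOAD_HOURS_PER_MINUTE[year]

half_a_yout = 0.5 * yout_hours(2015)
print(f"Half a YouT in 2015: {half_a_yout:.0f} hours "
      f"(~{half_a_yout * ASSUMED_GB_PER_HOUR:.0f} GB at the assumed bitrate)")
# -> 150 hours, ~225 GB under these assumptions
```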

That would make a lot more sense to me than something like 2 terabytes, a petabyte or whatever. It would also be a lot more sustainable, because there must be some kind of relation between the amount of video people actually produce and the amount of video they upload.

One last thing: I think the units should actually be called Moores. You can think of reasons why.