The world of data storage engineering has been very active in recent years. Different paradigms have been explored to deal with the ever-growing amount of information, ranging from shared-everything (the good old filesystem) to shared-nothing architecture redesigns. Nowadays, the shared-nothing design is at the front of the scene, pushed by Google's engineering (sharding with their MapReduce design). The main problem of the shared-nothing design sits in Brewer's CAP theorem: you can't have consistency, availability, and partition tolerance all at once; you have to pick two of the three.
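To make the trade-off concrete, here is a toy sketch (hypothetical classes, not any real library): during a simulated network partition, a consistency-first store refuses to answer rather than risk divergence, while an availability-first store keeps answering with possibly stale data.

```python
class Replica:
    """The remote copy of the data; `reachable` simulates the network."""
    def __init__(self):
        self.data = {}
        self.reachable = True  # set to False to simulate a partition

class CPStore:
    """Consistency over availability: fail when the replica is unreachable."""
    def __init__(self, replica):
        self.local = {}
        self.replica = replica

    def write(self, key, value):
        if not self.replica.reachable:
            raise RuntimeError("partition: refusing write to stay consistent")
        self.local[key] = value
        self.replica.data[key] = value  # synchronous replication

class APStore:
    """Availability over consistency: always answer, replicate best-effort."""
    def __init__(self, replica):
        self.local = {}
        self.replica = replica

    def write(self, key, value):
        self.local[key] = value
        if self.replica.reachable:
            self.replica.data[key] = value  # skipped during a partition

    def read(self, key):
        return self.local.get(key)  # may be stale from the replica's view

# During a partition the AP store diverges and the CP store errors out:
r = Replica()
cp, ap = CPStore(r), APStore(r)
r.reachable = False
ap.write("x", 1)  # succeeds locally, but the replica never sees it
try:
    cp.write("x", 2)
except RuntimeError:
    pass  # CP sacrifices availability to keep both copies consistent
```

Neither choice is "wrong": the CP store is what you want for a bank balance, the AP store for a shopping cart.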
But computer hardware makers have made major breakthroughs with cheap supercomputers like Parallella or Tilera. These were designed to address the scalability issues of supercomputers built from cheap machines on a very fast network grid, but they also sidestep the CAP issues as a side effect: the "network grid" is now on the board, so you can think of it as a math co-processor on steroids, with no network left to partition. You can plug ordinary disks with a good plain old filesystem into such a cheap supercomputer and use the grid and the storage for parallel computing without worrying about CAP trade-offs.
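In that shared-everything setup, parallelism can be as simple as dividing files among local workers, with no sharding or distributed coordination. A minimal sketch (the file layout and word-count task are illustrative assumptions):

```python
import os
import tempfile
from multiprocessing import Pool

def count_words(path):
    # Every worker reads directly from the same plain old filesystem.
    with open(path) as f:
        return sum(len(line.split()) for line in f)

if __name__ == "__main__":
    # A few files standing in for data sitting on the shared disks.
    d = tempfile.mkdtemp()
    paths = []
    for i in range(4):
        p = os.path.join(d, f"part{i}.txt")
        with open(p, "w") as f:
            f.write("hello world\n" * (i + 1))  # 2 words per line
        paths.append(p)

    # Fan the files out across local cores; no network, no CAP trade-off.
    with Pool(4) as pool:
        total = sum(pool.map(count_words, paths))
    print(total)  # 2*(1+2+3+4) = 20
```

The same pattern scales to the on-board grid: more cores, same filesystem, still no distributed consensus to run.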
Cool, isn't it?
So the future of data is probably less complex than one might think, thanks to the miniaturization of supercomputer grids.