
The New Model of Data: Big Data And The Cloud


Editor's Note: This article is part of our Future of Business Technology series, which focuses on what is happening to business today as a result of technology and, in turn, what's happening to the economy, the job market, and IT careers.

"Database management involves the sharing of large quantities of data by many users -- who, for the most part, conceive their actions on the data independently from one another," wrote Dr. E. F. Codd, the creator of the relational model for data. "The opportunities for users to damage data shared in this way are enormous, unless all users, whether knowledgeable about programming or not, abide by a discipline."

In Dr. Codd's world, data was a fluid substance that needed to be kindled and cultivated in the storehouses of his time, the "data banks." He argued that the only way to ensure data's accuracy -- its ability to reflect some sort of truth -- and its efficiency in pointing the way toward a possible course of action for a business was the universal adoption of a set of standards and practices, and the implementation of a common language: his discipline, which led to SQL. In a modern context, one could say Codd advised a networking of the people who administer data, coupled with a centralization of data around an economy of principles.

The first commercial Internet ran contrary to Codd's advice. It established a torrential sea of asynchronously communicating network hosts, all capable of servicing each other's requests for data anonymously. The first efforts to establish some kind of centralized core, retaining a universal directory of data published on the Web by all its participants -- the first Web portals -- eventually failed. What did succeed was the search engine: a device that scans the contents of published documents after the fact, and generates an index of their content based on semantic relationships -- educated guesses. To this day, search engineers refine the means by which those guesses are educated. The Web has been an expanding mass of text, yielding fewer tools for businesses, governments, and schools to organize and make sense of it all than originally promised.
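The after-the-fact indexing described above can be sketched as a minimal inverted index -- the core data structure behind search engines. This is an illustrative sketch only, not any particular engine's implementation; the document names and corpus here are hypothetical.

```python
from collections import defaultdict

# Hypothetical mini-corpus standing in for already-published Web pages.
documents = {
    "page1": "relational databases share data among many users",
    "page2": "the web is a sea of unstructured data",
    "page3": "search engines index data after the fact",
}

def build_index(docs):
    """Scan each document after the fact, mapping every term
    to the set of documents that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, term):
    """An 'educated guess': return every document mentioning the term."""
    return sorted(index.get(term.lower(), set()))

index = build_index(documents)
print(search(index, "data"))   # every page that mentions "data"
print(search(index, "web"))    # only the page that mentions "web"
```

Note that nothing is coordinated up front: the publishers of the pages never agreed on a schema or a discipline, and the index is rebuilt from whatever happens to have been published.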

About the Author

Scott M. Fulton, III has chronicled the history of computing as it happened, from the unveiling of the Apple III to the undoing of MS-DOS to the rise of the cloud. He's the author of 17 books and over 5,000 articles. Scott and his wife Jennifer run Ingenus, LLC, an editorial services provider for technology and higher education publishers. You can follow Scott on Twitter at @SMFulton3.


Yet a new model of the Internet is emerging to take the place of the original, such as it was. Cloud servers with extensive virtualization have changed what it means to be a "host": an IP address becomes a point of contact for a much larger, more pliable and mutable construction of processors, memory, and storage. In this world, a database looks more and more like a compromise: a hybridized amalgam of Codd's tower of perfect relations and the wild, wild Web.

There is, at last, some hope. The new and rapidly evolving big data industry is centered on a cluster of technologies that makes vast amounts of unstructured, unprocessed data useful, practical, and analyzable without the need for extensive contextual indexing after the fact.

