The concepts of Big Data were around well before the introduction of the Hadoop Distributed File System. Back when I was in college (ancient times, according to my family) pursuing my degree in astrophysics, I wrote a thesis on stellar spectroscopy: analyzing the spectra of electromagnetic radiation, including visible light, emitted by stars and other celestial objects to determine their physical properties, chemical compositions, and Doppler-shift motion. That description alone tells you that enormous volumes of data can be involved. At times it was tedious, poring through vast amounts of data from these celestial sources, giving new meaning to data sources “beyond the cloud.” Back then, we stored the data in an early form of NFS (Network File System) and dealt with archaic connectivity and reams of computer paper filled with composition graphs and numbers. Today, of course, we have much more sophisticated methods of storing and processing data. With the advent of Hadoop, high-performance connectivity, and tools that help process both structured and unstructured data, our new world offers great leaps in how we analyze, interpret, and act on what the data tells us.