It’s more than a buzzword: Big Data is a big deal. And though we’ve been creating, collecting and analyzing data forever, the current explosion in digital technology gives us access to an ever-expanding treasure trove of information that is changing how we conduct research, make business decisions and much more. Of course, Georgia Tech stands right in the middle of the action.
Humanity generates data at a dizzying pace. By 2020, the amount of data created worldwide is expected to hit 44 zettabytes, the equivalent of 44 trillion gigabytes, according to IDC Research.
Yet some computer science researchers wince at the now-popular term “Big Data.” They point out, correctly, that data volumes have been growing for decades, as the cost of storage has tumbled and as the things we produce and consume—documents, media, business applications and even social interactions—have become digital.
And if you thought 300 to 500 million tweets per day or 300 hours of video uploaded to YouTube per minute were impressive numbers, hang onto your hat. The tsunami of human-created data will soon be outpaced by a constant stream of data flowing from devices: sensors in smartphones, cars, homes, medical devices and machinery, to name but a few pieces of the rapidly growing Internet of Things (IoT).
“One thing we’re seeing in several domains is data spiraling up faster than our ability to analyze it,” says Srinivas Aluru, professor of computational science and engineering and co-director of the Institute for Data Engineering and Science at Georgia Tech.