Published: 25 Apr 2016 | Published by: QYRESEARCH
Apache Hadoop came into existence out of the need to process mammoth amounts of data. It all started with the World Wide Web, which, as it grew through the late 1990s and early 2000s, created a need to automate the handling of information. As the web expanded from dozens of pages to millions and generated ever more data, finding information manually, as was done in the early years, became impossible. To address this, search engines and indexes were developed to help people find relevant information amid the mass of textual content.
MapReduce, invented at Google, was one of the first frameworks to make processing of such huge amounts of data practical. Google's publication on the MapReduce framework inspired Doug Cutting and Mike Cafarella to apply the approach in their open-source search engine project, Nutch. Out of this work Hadoop was developed: data was distributed and processed across several computers so that multiple tasks could run simultaneously, helping search engines return results faster.
In 2006, Cutting joined Yahoo, and after a series of agreements between the two parties, Nutch was split. The web crawler segment kept the name Nutch, while the distributed data storage and processing segment became Hadoop. In 2008, Yahoo released Hadoop as an open-source project, and today its technology ecosystem and framework are managed and maintained by the Apache Software Foundation.
Presently, the rising demand across industry sectors for faster access to relevant data in everyday operations is driving growth of the Hadoop market. With the emergence of the Internet of Things (IoT) and the rapid proliferation of connected electronic devices, large amounts of data are generated that need to be stored in the cloud so they can be accessed from any part of the globe. This data must also be analyzed to be usable, providing immense growth opportunities for the Hadoop market. Moreover, consistent progress in Hadoop technologies and increasing investment in enhancements are fueling demand for Hadoop.