APACHE HADOOP

What is Apache Hadoop?

Apache Hadoop is an open-source software framework that splits large data sets into smaller blocks and distributes them across many servers for storage and processing. Hadoop's power comes from the cluster: many machines working in parallel can process large data sets far faster than any single machine.

Hadoop Architecture

The architecture of Hadoop is divided into four main parts:
Hadoop Common
HDFS (the storage part)
MapReduce (the processing part)
YARN (the resource negotiator)
Hadoop Common:
This component includes the common libraries and utilities used by the other Hadoop modules.

Hadoop Distributed File System (HDFS)
It is the storage layer, responsible for holding the data blocks. HDFS follows a master-slave architecture: the master runs the NameNode, which manages the file-system metadata, while the slaves run the DataNodes, which store the actual data.
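To show what this looks like in practice, here is a minimal sketch that copies a file into HDFS through Hadoop's Java FileSystem API; the file paths are hypothetical:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class HdfsCopyExample {
      public static void main(String[] args) throws Exception {
          // Reads core-site.xml / hdfs-site.xml from the classpath to locate the cluster.
          Configuration conf = new Configuration();
          FileSystem fs = FileSystem.get(conf);

          // Copy a local file into HDFS; the file is split into blocks that are
          // replicated across DataNodes, while the NameNode records which
          // DataNodes hold which blocks.
          fs.copyFromLocalFile(new Path("/tmp/input.txt"),
                               new Path("/user/hadoop/input.txt"));
          fs.close();
      }
  }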

Hadoop MapReduce
It is the software layer that processes the data sets in two steps:

Map: The master node breaks the job into smaller tasks and assigns them to different worker nodes.
Reduce: The reduce step collects the intermediate output from the worker nodes and combines it to produce the final result.
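As an illustration, here is a minimal word-count sketch of both steps using Hadoop's Java MapReduce API (the class names and the whitespace tokenization are our own choices; a matching job driver appears in the YARN section below):

  import java.io.IOException;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;

  public class WordCount {

      // Map step: split each input line into words and emit a (word, 1) pair per word.
      public static class TokenizerMapper
              extends Mapper<LongWritable, Text, Text, IntWritable> {
          private static final IntWritable ONE = new IntWritable(1);
          private final Text word = new Text();

          @Override
          protected void map(LongWritable key, Text value, Context context)
                  throws IOException, InterruptedException {
              for (String token : value.toString().split("\\s+")) {
                  if (!token.isEmpty()) {
                      word.set(token);
                      context.write(word, ONE);
                  }
              }
          }
      }

      // Reduce step: sum the counts collected for each word and emit the total.
      public static class SumReducer
              extends Reducer<Text, IntWritable, Text, IntWritable> {
          @Override
          protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                  throws IOException, InterruptedException {
              int sum = 0;
              for (IntWritable v : values) {
                  sum += v.get();
              }
              context.write(key, new IntWritable(sum));
          }
      }
  }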


YARN
YARN (Yet Another Resource Negotiator) is the layer responsible for providing the computational resources needed to run applications. It schedules jobs and allocates resources with the help of the ResourceManager and a per-application ApplicationMaster.
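To make the scheduling flow concrete, here is a minimal sketch of a driver that submits the word-count job above; on a cluster configured with mapreduce.framework.name set to yarn, waitForCompletion() hands the job to the ResourceManager, which allocates containers for the map and reduce tasks (the input and output paths come from the command line):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class WordCountDriver {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          Job job = Job.getInstance(conf, "word count");
          job.setJarByClass(WordCountDriver.class);
          job.setMapperClass(WordCount.TokenizerMapper.class);
          job.setReducerClass(WordCount.SumReducer.class);
          job.setOutputKeyClass(Text.class);
          job.setOutputValueClass(IntWritable.class);
          FileInputFormat.addInputPath(job, new Path(args[0]));
          FileOutputFormat.setOutputPath(job, new Path(args[1]));
          // Submit the job and wait for it to finish; under YARN the
          // ResourceManager schedules the tasks across the cluster.
          System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
  }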

Other Hadoop Components

Ambari: A web-based interface for provisioning, managing, and monitoring Hadoop clusters and their components, such as HDFS, MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig, and Sqoop.

Cassandra: An open-source, highly scalable, distributed NoSQL database that provides high availability with no single point of failure.

Flume: A distributed tool for efficiently collecting, aggregating, and moving large volumes of streaming data into HDFS.

HBase: A non-relational, distributed database that stores large structured data sets on top of HDFS.
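As a sketch of how an application reads and writes HBase from Java (the "users" table and its "info" column family are hypothetical and must already exist):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.util.Bytes;

  public class HBaseExample {
      public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          try (Connection conn = ConnectionFactory.createConnection(conf);
               Table table = conn.getTable(TableName.valueOf("users"))) {
              // Write one cell: row key "user1", column family "info", qualifier "name".
              Put put = new Put(Bytes.toBytes("user1"));
              put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"),
                            Bytes.toBytes("Ada"));
              table.put(put);

              // Read the cell back by row key.
              Result result = table.get(new Get(Bytes.toBytes("user1")));
              System.out.println(Bytes.toString(
                      result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
          }
      }
  }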

HCatalog: A table and storage management layer that lets different Hadoop tools access and share the same data.

Hive: A data-warehouse infrastructure that supports summarizing, querying, and analyzing data with HiveQL, a query language similar to SQL.
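For example, a Java client could run a HiveQL query through HiveServer2's JDBC interface; the endpoint, credentials, and the "words" table below are hypothetical:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class HiveQueryExample {
      public static void main(String[] args) throws Exception {
          // Requires the Hive JDBC driver (org.apache.hive.jdbc.HiveDriver) on the classpath.
          try (Connection conn = DriverManager.getConnection(
                       "jdbc:hive2://localhost:10000/default", "hadoop", "");
               Statement stmt = conn.createStatement();
               // HiveQL looks like SQL but is compiled into jobs over data in HDFS.
               ResultSet rs = stmt.executeQuery(
                       "SELECT word, COUNT(*) AS freq FROM words GROUP BY word")) {
              while (rs.next()) {
                  System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
              }
          }
      }
  }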

Oozie: A workflow scheduler that manages Hadoop jobs and chains them into workflows.

Pig: A high-level platform for transforming and analyzing data stored in HDFS using a scripting language called Pig Latin.

Solr: A highly scalable search platform offering full-text search, with centralized configuration and automatic recovery in clustered deployments.

Spark: A fast, general-purpose data-processing engine that supports SQL, streaming, machine learning, and graph processing.

Also see Hadoop vs Spark to learn more.
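As a taste of Spark's Java API, here is a minimal Spark SQL sketch; the HDFS path and the people.json file are hypothetical:

  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Row;
  import org.apache.spark.sql.SparkSession;

  public class SparkSqlExample {
      public static void main(String[] args) {
          SparkSession spark = SparkSession.builder()
                  .appName("SparkSqlExample")
                  .getOrCreate();

          // Load a JSON file from HDFS into a DataFrame and query it with SQL.
          Dataset<Row> people = spark.read().json("hdfs:///user/hadoop/people.json");
          people.createOrReplaceTempView("people");
          spark.sql("SELECT name FROM people WHERE age > 30").show();

          spark.stop();
      }
  }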

Sqoop: A tool for transferring bulk data between Hadoop and structured data stores such as relational databases.

Zookeeper: An open-source service that coordinates and synchronizes distributed systems.

What Makes Apache Hadoop Important?

Big data has pushed companies toward technologies that can store and manage complex, unstructured data without loss or delay. This spurred the adoption of Hadoop, an open-source framework designed to handle such data in a fault-tolerant way.


Primary benefits of Apache Hadoop

  • Stores and Processes Complex Data Sets: Hadoop reliably stores and processes very large volumes of complex data, which keeps it in high demand.
  • High Computational Power: Its distributed model processes big data at high speed, with many worker nodes running in parallel.
  • Fault Tolerance: If a node fails during processing, its work is automatically reassigned to another node, so the job keeps running.
  • No Pre-processing Needed: Huge volumes of structured as well as unstructured data can be stored and processed without any pre-processing step.
  • Highly Scalable: A cluster can grow from a single server to thousands of servers with little extra administration.
  • Cost-Effective: Hadoop is an open-source framework, so the software itself is free to use.