Apache Hadoop
Stable release: 1.2.1 / August 1, 2013
Preview release: 2.0.5-alpha / June 4, 2013
Development status: Active
Written in: Java
License: Apache License 2.0
Website: hadoop.apache.org
Apache Hadoop
is an open-source software framework that
supports data-intensive distributed applications,
licensed under the Apache v2 license. It supports the running of applications
on large clusters of commodity hardware.
The
Hadoop framework transparently provides both reliability and data motion to applications.
Hadoop
implements a computational paradigm named MapReduce, where the application is divided into many small fragments
of work, each of which may be executed or re-executed on any node in the
cluster.
In
addition, it provides a distributed file system that stores data on the compute
nodes, providing very high aggregate bandwidth across the cluster.
Both
map/reduce and the distributed file system are designed so that node failures
are automatically handled by the framework.
It
enables applications to work with thousands of computation-independent
computers and petabytes of data.
The
entire Apache Hadoop "platform" is now commonly considered to consist of the Hadoop kernel, MapReduce, and the Hadoop Distributed File System (HDFS), as well as a number of related projects, including Apache Hive, Apache HBase, and Apache ZooKeeper.
Hadoop is written in the Java programming language and is an Apache top-level project being built and used by a global community of contributors.
Hadoop
and its related projects (Hive, HBase, Zookeeper, and so on) have many contributors from across the
ecosystem. Though Java code is most common, any programming language can be used with Hadoop Streaming to implement the "map" and "reduce" parts of the system; a minimal Java sketch of these two parts follows.
History
Doug Cutting, who was working at Yahoo! at the time, named the project after his son's toy elephant.
Architecture
Hadoop
consists of the Hadoop Common package, which provides access to the file systems supported by Hadoop.
The Hadoop
Common package contains the necessary Java ARchive (JAR) files and
scripts needed to start Hadoop.
The package also provides source code, documentation, and a contribution section that includes projects from the Hadoop Community.
For effective scheduling of work, every
Hadoop-compatible file system should provide location awareness: the name of
the rack (more precisely, of the network switch) where a worker node is.
Hadoop applications can use this
information to run work on the node where the data is, and, failing that, on
the same rack/switch, reducing backbone traffic.
The Hadoop Distributed File System (HDFS) uses this method when replicating data to try to
keep different copies of the data on different racks.
The goal is to reduce the impact of a
rack power outage or switch failure, so that even if these events occur, the
data may still be readable.
A multi-node Hadoop cluster
A small Hadoop cluster will include a single master and multiple
worker nodes.
The master node consists of a JobTracker, TaskTracker, NameNode, and DataNode.
A slave or worker node acts as
both a DataNode and TaskTracker, though it is possible to have data-only worker
nodes and compute-only worker nodes.
These are normally used only in
nonstandard applications.
The standard start-up and shutdown
scripts require Secure Shell (ssh) to be set up between nodes in
the cluster.
In a larger cluster, the HDFS is managed
through a dedicated NameNode
server to host the file system index, and a secondary
NameNode that can generate snapshots of the namenode's memory
structures, thus preventing file-system corruption and reducing loss of data.
Similarly, a
standalone JobTracker server can manage job scheduling.
In clusters where the Hadoop MapReduce engine
is deployed against an alternate file system, the NameNode, secondary NameNode
and DataNode architecture of HDFS is replaced by the file-system-specific
equivalent.
File systems
Hadoop distributed file system
A Hadoop instance typically has a single namenode, while a cluster of datanodes forms the HDFS cluster; not every node in the cluster needs to host a datanode. Each datanode serves up blocks of data over the network using a block protocol specific to HDFS.
It achieves reliability by replicating the data across multiple hosts, and hence does not require RAID
storage on hosts.
With the default replication value, 3,
data is stored on three nodes: two on the same rack, and one on a different rack.
Data nodes can talk to each other to
rebalance data, to move copies around, and to keep the replication of data
high.
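The placement rule just described (two copies on one rack, a third copy on a different rack) can be sketched as follows. This is an illustrative Java function, not HDFS's actual block-placement code, and it assumes at least two nodes on the writer's rack and at least one node on another rack.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative sketch of the replica-placement rule described above: with
// replication 3, keep two copies on the writer's rack and one on another rack.
class ReplicaPlacementSketch {
    private final Random random = new Random();

    List<String> chooseTargets(List<String> localRackNodes, List<String> otherRackNodes) {
        List<String> targets = new ArrayList<String>();
        // First copy: the node the client is writing from (assumed to be index 0).
        targets.add(localRackNodes.get(0));
        // Second copy: another node on the same rack.
        targets.add(localRackNodes.get(1 + random.nextInt(localRackNodes.size() - 1)));
        // Third copy: a node on a different rack, so a rack failure leaves one copy readable.
        targets.add(otherRackNodes.get(random.nextInt(otherRackNodes.size())));
        return targets;
    }
}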
HDFS is not fully POSIX compliant, because the requirements for a POSIX file system
differ from the target goals for a Hadoop application.
The tradeoff of not having a fully
POSIX-compliant file system is increased performance for data throughput and support for non-POSIX operations such as Append.
HDFS was designed to handle very large files.
HDFS added high-availability capabilities, as announced for release 2.0 in May 2012, allowing the main metadata server (the NameNode) to be failed over manually to a backup in the event of failure.
Automatic fail-over is being developed
as well.
Additionally, the file system includes
what is called a secondary namenode, which misleads
some people into thinking that when the primary namenode goes offline, the
secondary namenode takes over.
In fact, the secondary namenode
regularly connects with the primary namenode and builds snapshots of the primary
namenode's directory information, which is then saved to local or remote
directories.
These checkpointed images can be used to restart a failed primary namenode without having to replay the entire journal of file-system actions and then edit the log to create an up-to-date directory structure.
Because the namenode is the single point
for storage and management of metadata, it can be a bottleneck for supporting a
huge number of files, especially a large number of small files. HDFS Federation
is a new addition that aims to tackle this problem to a certain extent by
allowing multiple name spaces served by separate namenodes.
An advantage of using HDFS is data
awareness between the job tracker and task tracker.
The job tracker schedules map or reduce jobs to task
trackers with an awareness of the data location.
For example, if node A contains data (x, y, z) and node B contains data (a, b, c), the job tracker will schedule node B to perform map or reduce tasks on (a, b, c) and node A to perform map or reduce tasks on (x, y, z).
This reduces the amount of traffic that
goes over the network and prevents unnecessary data transfer.
When Hadoop is used with other file systems this
advantage is not always available. This can have a significant impact on
job-completion times, which has been demonstrated when running data-intensive
jobs.
HDFS was designed for mostly immutable
files and may not be suitable for systems requiring concurrent write operations.
Getting data into and out of the HDFS
file system, an action that often needs to be performed before and after
executing a job, can be inconvenient.
A Filesystem in Userspace
(FUSE) virtual file system
has been developed to address this problem, at least for Linux and some other Unix systems.
File access can be achieved through C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, and OCaml; a minimal Java example is sketched below.
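As a sketch of the native Java route, the following program reads a file from HDFS through the org.apache.hadoop.fs.FileSystem API. The namenode address and file path are placeholders; the configuration would normally come from core-site.xml.

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch of reading a file through the HDFS Java API.
class HdfsReadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Normally picked up from core-site.xml; the address here is a placeholder.
        conf.set("fs.default.name", "hdfs://namenode:9000");
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/user/example/input.txt"); // hypothetical file
        BufferedReader reader = new BufferedReader(new InputStreamReader(fs.open(path), "UTF-8"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            reader.close();
            fs.close();
        }
    }
}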
Other file systems
By May
2011, the list of supported file systems included:
·HDFS: Hadoop's own rack-aware file system. This is designed to scale to tens of petabytes of storage and runs on top of the file systems of the underlying operating systems.
·Amazon S3 file system: targeted at clusters hosted on the Amazon Elastic Compute Cloud server-on-demand infrastructure. There is no rack-awareness in this file system, as it is all remote.
·MapR's maprfs file system: provides inherent high availability, transactionally correct snapshots, and mirrors, while offering higher scaling and higher performance than HDFS. Maprfs is available as part of the MapR distribution and as a native option on Elastic MapReduce from Amazon Web Services.
·FTP file system: stores all its data on remotely accessible FTP servers.
Hadoop can work directly with any distributed file system
that can be mounted by the underlying operating system simply by using a
file:// URL;
however, this comes at a price: the loss
of locality.
To reduce network traffic, Hadoop needs
to know which servers are closest to the data; this is information that
Hadoop-specific file system bridges can provide.
Out of the box, this includes Amazon S3 and the CloudStore filestore, accessed through s3:// and kfs:// URLs directly (see the sketch below).
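As a sketch of how the URL scheme selects the file system implementation, the snippet below obtains a local file:// file system and an s3:// file system from the same API. The bucket name, paths, and credential values are placeholders, and the fs.s3.* property names are the Hadoop 1.x ones.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch: the scheme of the URI picks the file system implementation.
class FileSystemByUriSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder credentials for the s3:// scheme; real values would come from configuration.
        conf.set("fs.s3.awsAccessKeyId", "YOUR_ACCESS_KEY");
        conf.set("fs.s3.awsSecretAccessKey", "YOUR_SECRET_KEY");

        // A locally mounted file system via a file:// URL (no locality information for Hadoop).
        FileSystem local = FileSystem.get(URI.create("file:///data/input"), conf);
        // Amazon S3 via the s3:// scheme; the bucket name is hypothetical.
        FileSystem s3 = FileSystem.get(URI.create("s3://example-bucket/"), conf);

        System.out.println(local.getUri() + " / " + s3.getUri());
    }
}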
A number of third-party file system
bridges have also been written, none of which are currently in Hadoop
distributions.
·In
2009 IBM discussed running Hadoop over the IBM General Parallel File System. The source code was published in October 2009.
·In
April 2010, Parascale published the source code to run Hadoop against the Parascale
file system.
·In
April 2010, Appistry released a Hadoop file system driver for use with its own
CloudIQ Storage product.
·In May 2011, MapR Technologies, Inc. announced the availability of an alternative file system for Hadoop, which replaced the HDFS file system with a full random-access read/write file system, with advanced features like snapshots and mirrors, and eliminated the single-point-of-failure issue of the default HDFS NameNode.
JobTracker and TaskTracker: the MapReduce engine
JobTracker
Above the file systems comes the MapReduce engine, which consists of one JobTracker, to which
client applications submit MapReduce jobs.
The JobTracker pushes work out to
available TaskTracker nodes in the cluster, striving to keep the work as
close to the data as possible.
With a rack-aware file system, the
JobTracker knows which node contains the data, and which other machines are
nearby.
If the work cannot be hosted on the
actual node where the data resides, priority is given to nodes in the same
rack.
This reduces network traffic on the main
backbone network.
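To make the client side of this concrete, here is a hedged sketch of a driver that configures and submits a word-count job, reusing the mapper and reducer classes sketched earlier (assumed to be in the same package); input and output paths are taken from the command line.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Sketch of a client-side driver: configure the job, point it at input and output
// paths, and submit it; on an MRv1 cluster the submission goes to the JobTracker.
class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count"); // Job.getInstance(conf, ...) in later releases
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);    // classes from the earlier sketch
        job.setCombinerClass(WordCountReducer.class); // the reducer doubles as a combiner here
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}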
TaskTracker
If a TaskTracker fails or times out,
that part of the job is rescheduled.
The TaskTracker on each node spawns off
a separate Java Virtual Machine process to prevent the TaskTracker itself from failing if
the running job crashes the JVM.
A heartbeat is sent from the TaskTracker to the JobTracker every few seconds to check its status.
The JobTracker and TaskTracker status and information are exposed by Jetty and can be viewed from a web browser.
If the JobTracker failed on Hadoop 0.20
or earlier, all ongoing work was lost.
Hadoop version 0.21 added some
checkpointing to this process; the JobTracker records what it is up to in the
file system.
When a JobTracker starts up, it looks
for any such data, so that it can restart work from where it left off.
Known limitations of this approach are:
·The
allocation of work to TaskTrackers is very simple. Every TaskTracker has a
number of available slots (such as "4 slots"). Every active
map or reduce task takes up one slot. The JobTracker allocates work to the
tracker nearest to the data with an available slot. There is no consideration
of the current system load of the allocated machine, and hence
its actual availability.
·If one TaskTracker is very slow, it can delay the entire MapReduce job, especially towards the end of a job, where everything can end up waiting for the slowest task. With speculative execution enabled, however, a single task can be executed on multiple slave nodes (see the sketch after this list).
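As a small illustration, speculative execution can be toggled per job through configuration properties; the property names below are the MRv1 (Hadoop 1.x) ones, used on the assumption that an MRv1 cluster is the target.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Sketch: toggling speculative execution per job; property names are the MRv1 ones.
class SpeculativeExecutionSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setBoolean("mapred.map.tasks.speculative.execution", true);
        conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
        Job job = new Job(conf, "speculative-execution-demo");
        System.out.println("Speculative maps: "
                + job.getConfiguration().getBoolean("mapred.map.tasks.speculative.execution", false));
    }
}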
Scheduling
By default Hadoop uses FIFO scheduling, with five optional scheduling priorities, to schedule jobs from a work queue. In version 0.19 the job scheduler was refactored out of the JobTracker, which added the ability to use an alternate scheduler (such as the Fair Scheduler or the Capacity Scheduler).
Fair scheduler
The goal of the fair scheduler is to provide fast response times for small jobs and QoS for production jobs. The fair scheduler has three basic concepts: 1. jobs are grouped into pools; 2. each pool is assigned a guaranteed minimum share; 3. excess capacity is split between jobs.
By default, jobs that are uncategorized
go into a default pool.
Pools have to specify the minimum number
of map slots, reduce slots, and a limit on the number of running jobs.
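As a hedged illustration of how a job ends up in a particular pool, the snippet below sets the pool property on the job configuration. The property name mapred.fairscheduler.pool is the one documented for the Hadoop 1.x fair scheduler, and the pool name is hypothetical; treat both as assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Sketch: assigning a job to a named fair-scheduler pool.
class PoolAssignmentSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "production" is a hypothetical pool that would be defined in the scheduler's
        // allocation file with a guaranteed minimum number of map and reduce slots.
        conf.set("mapred.fairscheduler.pool", "production");
        Job job = new Job(conf, "pool-assignment-demo");
        System.out.println("Pool: " + job.getConfiguration().get("mapred.fairscheduler.pool"));
    }
}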
Capacity scheduler
The capacity scheduler supports several
features which are similar to the fair scheduler.
·Jobs are submitted into queues.
·Queues are allocated a fraction of the total resource
capacity.
·Free resources are allocated to queues beyond their total
capacity.
·Within a queue, a job with a high level of priority will have access to the queue's resources.
Other applications
The HDFS
file system is not restricted to MapReduce jobs. It can be used for other
applications, many of which are under development at Apache. The list includes
the HBase database, the Apache Mahout machine learning system, and the Apache Hive data warehouse system. Hadoop can in theory be used for any sort of work that is batch-oriented rather than real-time, is very data-intensive, and is able to work on pieces of the data in parallel. As of October 2009, commercial applications of Hadoop included:
·Log
and/or clickstream analysis of various kinds
·Marketing
analytics
·Machine
learning and/or sophisticated data mining
·Image
processing
·Processing
of XML messages
·Web
crawling and/or text processing
·General
archiving, including of relational/tabular data, e.g. for compliance
Prominent users
Yahoo!
On
February 19, 2008, Yahoo! Inc. launched what it claimed was the
world's largest Hadoop production application. The Yahoo! Search Webmap is a
Hadoop application that runs on a Linux cluster with more than 10,000 cores and produces data that is used in every Yahoo! Web search query.
There are
multiple Hadoop clusters at Yahoo! and no HDFS file systems or MapReduce jobs
are split across multiple datacenters. Every Hadoop cluster node bootstraps the
Linux image, including the Hadoop distribution. Work that the clusters perform
is known to include the index calculations for the Yahoo! search engine.
On June
10, 2009, Yahoo! made the source code of the version of Hadoop it runs in production
available to the public. Yahoo! contributes back all work it does on Hadoop to
the open-source community. The company's developers also fix bugs and provide
stability improvements internally, and release this patched source code so that
other users may benefit from their effort.
Facebook
In 2010 Facebook claimed that they had the largest Hadoop cluster in the
world with 21 PB of storage. On July 27, 2011 they
announced the data had grown to 30 PB. On June 13, 2012 they announced the data
had grown to 100 PB. On November 8, 2012 they announced that the warehouse was growing by roughly half a PB per day.
Other users
Besides
Facebook and Yahoo!, many other organizations are using Hadoop to run large distributed
computations.
Some of the
notable users include:
Hadoop on Amazon EC2/S3 services
It is possible to run Hadoop on Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3).
As an example, The New York Times used 100
Amazon EC2 instances and a Hadoop application to process 4 TB of raw image
TIFF data (stored in S3) into 11 million finished PDFs in the space of 24 hours at a computation cost of about
$240 (not including bandwidth).
There is support for the S3 file system
in Hadoop distributions, and the Hadoop team generates EC2 machine images after
every release. From a pure performance perspective, Hadoop on S3/EC2 is
inefficient, as the S3 file system is remote and delays returning from every
write operation until the data is guaranteed not to be lost.
This removes the locality advantages of Hadoop, which
schedules work near data to save on network load.
Amazon Elastic MapReduce
Elastic MapReduce (EMR) was introduced by Amazon in April 2009. Provisioning of the Hadoop cluster, running
and terminating jobs, and handling data transfer between EC2 and S3 are
automated by Elastic MapReduce. Apache Hive, which is built on top of Hadoop for providing data
warehouse services, is also offered in Elastic MapReduce.
Support for using Spot Instances was later added in August 2011. Elastic MapReduce is fault-tolerant for slave failures, and it is recommended to run only the Task Instance Group on spot instances to take advantage of the lower cost while maintaining availability.
In June 2012, premium options for EMR
were added which replace ordinary Hadoop with MapR's M3 and M5 versions. These options provide additional
capabilities over and above what the default EMR offering provides.
Industry support of academic clusters
IBM
and Google announced an initiative in 2007 to use Hadoop to support
university courses in distributed computer programming.
In 2008 this collaboration, the Academic
Cloud Computing Initiative (ACCI), partnered with the National Science Foundation to provide grant funding to academic researchers interested
in exploring large-data applications. This resulted in the creation of the
Cluster Exploratory (CLuE) program.
Running Hadoop in compute farm environments
Instead of setting up a dedicated Hadoop
cluster, an existing compute farm can be used if the resource manager of the
cluster is aware of the Hadoop jobs, and thus Hadoop jobs can be scheduled like
other jobs in the cluster.
Grid engine integration
Integration with Sun Grid Engine was released in 2008, and running Hadoop on Sun Grid (Sun's on-demand utility computing service) was possible.
In
the initial implementation of the integration, the CPU-time scheduler has no
knowledge of the locality of the data. Unfortunately, this means that the
processing is not always done on the same rack as the data; this was a key
feature of the Hadoop Runtime. An improved integration with data-locality was
announced during the Sun HPC Software Workshop '09.
In 2008-2009 Sun released the Hadoop Live CD OpenSolaris project, which allows running a fully functional Hadoop
cluster using a live CD.
This distribution includes Hadoop 0.19; as of April 2010, there had not been an updated release.
Condor integration
The Condor High-Throughput Computing
System integration was presented at the Condor
Week conference in 2010.
Commercially supported Hadoop-related products
There are a number of companies offering
commercial implementations and/or providing support for Hadoop.
·sqrrl offers sqrrl enterprise, which extends Hadoop with Apache Accumulo and combines the features of several datastores (Column + Document + Graph).
·Pivotal offers Pivotal HD, a distribution of Hadoop that includes
HAWQ, with 100% ANSI SQL compatibility.
·IBM offers WebSphere eXtreme Scale (formerly ObjectGrid), which includes two styles of the Hadoop map-reduce pattern in its "agents", a.k.a. DataGrid APIs. Together with its scalable distributed data cache capability, it gives both map-reduce's ability to parallelize function and the ability to store plenty of data (in memory) for the function to quickly access. It is transactional and highly available, too.
·Talend offers Talend Open Studio for Big Data, released under the Apache Software License, which includes native support for Apache Hadoop.
·Zettaset offers a new version of its Big Data Management Platform based on Hadoop. Zettaset's Big Data Platform delivers high availability via NameNode failover, a streamlined UI, Network Time Protocol support, and built-in security via Kerberos authentication.
·In May 2010, Pentaho announced support for Apache Hadoop, allowing companies to access data integration and business analytics directly against Apache Hadoop-based distributions. In January 2012, Pentaho announced they had made their big data integration capabilities freely available under open source, and moved the entire Pentaho Kettle (data integration engine) project from the LGPL license to the Apache License.
·In
March 2011, Platform Computing announced
support for the Hadoop MapReduce API in its Symphony software.
·In May 2011, MapR Technologies, Inc. announced the availability of their distributed file system and MapReduce engine, the MapR Distribution for Apache Hadoop. The MapR product includes most Hadoop ecosystem components and adds capabilities such as snapshots, mirrors, NFS access and full read-write file semantics. MapR's distribution was selected by Amazon to provide premium versions of the Elastic Map Reduce (EMR) service.
·Silicon Graphics International offers Hadoop optimized solutions based on the SGI Rackable
and CloudRack server lines with implementation services.
·EMC released EMC Greenplum Community Edition and EMC
Greenplum HD Enterprise Edition in May 2011. The community edition, with
optional for-fee technical support, consists of Hadoop, HDFS, HBase, Hive, and the ZooKeeper configuration service. The enterprise edition is an
offering based on the MapR product, and offers proprietary
features such as snapshots and wide area replication.
·In
June 2011, Yahoo! and Benchmark Capital formed Hortonworks Inc., whose
focus is on making Hadoop more robust and easier to install, manage and use for
enterprise users.
·In
Oct 2011, Oracle announced the Big Data Appliance,
which integrates Cloudera's Distribution Including Hadoop (CDH), Oracle Enterprise Linux,
the R programming language, and a NoSQL database with the Exadata hardware.
·OceanSync
Hadoop Management and Visualization Software allows users to control, monitor,
and visualize all aspects of a Hadoop cluster including data analytic workflow
management and output data processing visualization. The package is offered in
three versions, OceanSync Free Desktop Edition, OceanSync Enterprise Edition
with Visualization, and OceanSync Mobile for iPhone/Android devices, by Dovestech.
·Grand
Logic's JobServer product allows developers and admins to deploy, manage and
monitor their Hadoop infrastructure, with support for Hadoop job processing and
HDFS file/content management.
·Dell added Pentaho Business Analytics to the Dell Apache Hadoop
solution for big data analytics which consists of Dell servers, Dell networking
components, Dell's Crowbar cloud deployment framework open source software, and
Cloudera Distribution including Apache Hadoop (CDH).
·Microsoft is offering a developer preview of HDInsight which is a
100% Apache compatible Hadoop distribution.
·In February 2013, Intel released its own Hadoop distribution that takes advantage of capabilities in Intel Xeon chips, such as processor instructions for accelerating AES encryption.
·Splunk offers a Hadoop integration product called Hadoop
Connect which is certified on MapR, Cloudera, Hortonworks, and Apache Hadoop. This integration allows users to search
Hadoop data from Splunk and import data from Hadoop into Splunk and vice versa.
·In
January 2012, EMC Isilon announced support for HDFS in its OneFS clustered file
system.
ASF's view on the use of "Hadoop" in product names
The Apache
Software Foundation has stated that only software officially released by the
Apache Hadoop Project can be called Apache Hadoop or Distributions of
Apache Hadoop. The naming of products and derivative works from other
vendors and the term "compatible" are somewhat controversial within
the Hadoop developer community.
Papers
Some
papers influenced the birth and growth of Hadoop and big data processing. Here
is a partial list:
·2004: MapReduce: Simplified Data Processing on Large Clusters, by Jeffrey Dean and Sanjay Ghemawat from Google Lab. This paper inspired Doug Cutting to develop an open-source implementation of the MapReduce framework. He named it Hadoop, after his son's toy elephant.
·2005: From Databases to Dataspaces: A New Abstraction for Information Management. In this paper, the authors highlight the need for storage systems to accept all data formats and to provide APIs for data access that evolve based on the storage system's understanding of the data.