
MapReduce and Hadoop

Jiaheng Lu, Renmin University of China

Outline
• Introduction to MapReduce, a software framework for distributed computing
• Introduction to Hadoop, an open-source framework for distributed computing
• Summary

MapReduce Online Evaluation
• Solve problems by programming with the MapReduce framework
• An online judge system lets you test your own programs
• http://cloudcomputing.ruc.edu.cn/index.jsp

MapReduce: Insight

• Consider the problem of counting the number of occurrences of each word in a large collection of documents.

• How would you do it in parallel?

MapReduce Programming Model

• Inspired by the map and reduce operations commonly used in functional programming languages like Lisp.

• Users implement an interface with two primary methods:
  1. Map: (key1, val1) → (key2, val2)
  2. Reduce: (key2, [val2]) → [val3]

Map operation
• Map, a pure function written by the user, takes an input key/value pair and produces a set of intermediate key/value pairs.
  – e.g. (doc-id, doc-content)

• Drawing an analogy to SQL, map can be viewed as the group-by clause of an aggregate query.

Reduce operation
• On completion of the map phase, all the intermediate values for a given output key are combined into a list and given to a reducer.

• Can be viewed as an aggregate function (e.g., average) computed over all the rows with the same group-by attribute.

Pseudo-code

map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):
  // output_key: a word
  // intermediate_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));
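For concreteness, here is how the same word count looks when written against Hadoop's Java MapReduce API. This is a sketch, not part of the original slides; the class names WordCount, TokenizerMapper, and IntSumReducer are illustrative.

  import java.io.IOException;
  import java.util.StringTokenizer;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class WordCount {

    // Map: (doc offset, line of text) -> (word, 1)
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
      private final static IntWritable one = new IntWritable(1);
      private final Text word = new Text();

      public void map(Object key, Text value, Context context)
          throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
          word.set(itr.nextToken());
          context.write(word, one);
        }
      }
    }

    // Reduce: (word, [1, 1, ...]) -> (word, count)
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
      private final IntWritable result = new IntWritable();

      public void reduce(Text key, Iterable<IntWritable> values, Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
          sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
      }
    }

    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Job job = Job.getInstance(conf, "word count");
      job.setJarByClass(WordCount.class);
      job.setMapperClass(TokenizerMapper.class);
      job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation
      job.setReducerClass(IntSumReducer.class);
      job.setOutputKeyClass(Text.class);
      job.setOutputValueClass(IntWritable.class);
      FileInputFormat.addInputPath(job, new Path(args[0]));
      FileOutputFormat.setOutputPath(job, new Path(args[1]));
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }

A typical invocation would be: hadoop jar wordcount.jar WordCount <input-dir> <output-dir> (jar name and paths are placeholders).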

MapReduce: Execution overview

MapReduce: Example

MapReduce in Parallel: Example

MapReduce: Fault Tolerance
• Handled via re-execution of tasks.
  Task completion is committed through the master.

• What happens if a Mapper fails?
  – Re-execute completed + in-progress map tasks

• What happens if a Reducer fails?
  – Re-execute in-progress reduce tasks

• What happens if the Master fails?
  – Potential trouble!

MapReduce: Walkthrough of One More Application

MapReduce: PageRank

PageRank models the behavior of a "random surfer".

The "random surfer" keeps clicking on successive links at random, not taking content into consideration.

Each page distributes its PageRank equally among all pages it links to.

The damping factor models the surfer "getting bored" and typing an arbitrary URL.

In the formula below, C(t) is the out-degree of page t, and (1 - d) is the damping term (random jump):

PR(x) = (1 - d) + d * Σ_{i=1}^{n} PR(t_i) / C(t_i)

where t_1, ..., t_n are the pages that link to x.
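As a quick worked example (numbers chosen purely for illustration): with d = 0.85, if page x is linked from t1 with PR(t1) = 0.5 and C(t1) = 2, and from t2 with PR(t2) = 0.3 and C(t2) = 3, then PR(x) = 0.15 + 0.85 * (0.5/2 + 0.3/3) = 0.15 + 0.85 * 0.35 = 0.4475.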

PageRank: Key Insights

• The effect of each iteration is local: the (i+1)-th iteration depends only on the i-th iteration.

• At iteration i, the PageRank of individual nodes can be computed independently.

PageRank using MapReduce

• Use a sparse matrix representation (M).

• Map each row of M to a list of PageRank "credit" to assign to out-link neighbours.

• These prestige scores are reduced to a single PageRank value for a page by aggregating over them.

PageRank using MapReduce
Map: distribute PageRank "credit" to link targets

Reduce: gather up PageRank "credit" from multiple sources to compute the new PageRank value

Iterate until convergence

Source of Image: Lin 2008

Phase 1: Process HTML

• Map task takes (URL, page-content) pairs and maps them to (URL, (PRinit, list-of-urls))
  – PRinit is the "seed" PageRank for URL
  – list-of-urls contains all pages pointed to by URL

• Reduce task is just the identity function

Phase 2: PageRank Distribution

• Reduce task gets (URL, url_list) and many (URL, val) values (a code sketch of this phase follows below)
  – Sum the vals and fix up with d to get the new PR
  – Emit (URL, (new_rank, url_list))

• Check for convergence using a non-parallel component
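A minimal sketch of the Phase 2 map and reduce in Hadoop's Java API, assuming node records of the form "url TAB rank TAB comma-separated-outlinks"; the record encoding, the LINKS/RANK tags, and the DAMPING constant are illustrative assumptions, not part of the original slides.

  import java.io.IOException;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;

  public class PageRankPhase2 {
    static final double DAMPING = 0.85;  // assumed value of d

    public static class RankMapper extends Mapper<Object, Text, Text, Text> {
      public void map(Object key, Text value, Context ctx)
          throws IOException, InterruptedException {
        // Input line: "url \t rank \t out1,out2,..." (assumed encoding)
        String[] parts = value.toString().split("\t");
        String url = parts[0];
        double rank = Double.parseDouble(parts[1]);
        String outlinks = parts.length > 2 ? parts[2] : "";

        // Pass the adjacency list through so the reducer can re-emit it.
        ctx.write(new Text(url), new Text("LINKS\t" + outlinks));

        // Distribute this page's rank equally among its out-link targets.
        if (!outlinks.isEmpty()) {
          String[] targets = outlinks.split(",");
          for (String t : targets) {
            ctx.write(new Text(t), new Text("RANK\t" + (rank / targets.length)));
          }
        }
      }
    }

    public static class RankReducer extends Reducer<Text, Text, Text, Text> {
      public void reduce(Text url, Iterable<Text> values, Context ctx)
          throws IOException, InterruptedException {
        double sum = 0.0;
        String links = "";
        for (Text v : values) {
          String[] parts = v.toString().split("\t", 2);
          if ("LINKS".equals(parts[0])) {
            links = parts[1];                     // keep the graph structure for the next iteration
          } else {
            sum += Double.parseDouble(parts[1]);  // gather incoming "credit"
          }
        }
        double newRank = (1 - DAMPING) + DAMPING * sum;  // "fix up with d"
        ctx.write(url, new Text(newRank + "\t" + links));
      }
    }
  }

The driver (not shown) would run this job repeatedly, checking convergence between iterations with a non-parallel component, as the slide describes.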

MapReduce: Some More Apps

• Distributed Grep

• Count of URL Access Frequency

• Clustering (K-means)

• Graph Algorithms

• Indexing Systems

MapReduce Programs In Google Source Tree

MapReduce: Extensions and Similar Apps

• Pig (Yahoo!)

• Hadoop (Apache)

• DryadLINQ (Microsoft)

Large-Scale Systems Architecture using MapReduce

• Introduction to MapReduce, a software framework for distributed computing
• Introduction to Hadoop, an open-source framework for distributed computing
• Summary

Hadoop Book

• Our new book about cloud computing and Hadoop
• Download chapters:
• http://www.jiahenglu.net/course/cloudcomputing2010/index.html

Outline

• Architecture of the Hadoop Distributed File System
• Hadoop usage at Facebook

Why Hadoop?

• Need to process multi-petabyte datasets
• Expensive to build reliability into each application
• Nodes fail every day
  – Failure is expected, rather than exceptional
  – The number of nodes in a cluster is not constant

• Need common infrastructure
  – Efficient, reliable, open source (Apache License)

Hadoop History

• Dec 2004 – Google GFS paper published
• July 2005 – Nutch uses MapReduce
• Feb 2006 – Becomes a Lucene subproject
• Apr 2007 – Yahoo! runs Hadoop on a 1000-node cluster
• Jan 2008 – Becomes an Apache top-level project
• Jul 2008 – A 4000-node test cluster
• Sept 2008 – Hive becomes a Hadoop subproject

Who uses Hadoop?
• Amazon/A9
• Facebook
• Google
• IBM
• Joost
• Last.fm
• New York Times
• PowerSet
• Veoh
• Yahoo!

Commodity Hardware

Typically a 2-level architecture
– Nodes are commodity PCs
– 30-40 nodes per rack
– Uplink from the rack is 3-4 gigabit
– Rack-internal is 1 gigabit

Goals of HDFS
• Very large distributed file system
  – 10K nodes, 100 million files, 10 PB

• Assumes commodity hardware
  – Files are replicated to handle hardware failure
  – Detects failures and recovers from them

• Optimized for batch processing
  – Data locations exposed so that computations can move to where the data resides
  – Provides very high aggregate bandwidth

• Runs in user space on heterogeneous operating systems

Distributed File System
• Single namespace for the entire cluster

• Data coherency
  – Write-once-read-many access model
  – Clients can only append to existing files

• Files are broken up into blocks
  – Typically 128 MB block size
  – Each block replicated on multiple DataNodes

• Intelligent client
  – Client can find the location of blocks
  – Client accesses data directly from the DataNode (see the client sketch below)
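To illustrate how a client uses HDFS, here is a minimal sketch (not from the original slides) that writes and then reads a file through Hadoop's Java FileSystem API; the file path is a hypothetical example, and the cluster address is assumed to come from the standard configuration files.

  import java.nio.charset.StandardCharsets;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IOUtils;

  public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
      FileSystem fs = FileSystem.get(conf);

      Path path = new Path("/tmp/hello.txt");     // hypothetical path

      // Write: the NameNode allocates blocks; the data is pipelined to DataNodes.
      try (FSDataOutputStream out = fs.create(path, true)) {
        out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
      }

      // Read: the client asks the NameNode for block locations,
      // then streams the data directly from a DataNode.
      try (FSDataInputStream in = fs.open(path)) {
        IOUtils.copyBytes(in, System.out, 4096, false);
      }

      fs.close();
    }
  }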

NameNode Metadata
• Metadata kept in memory
  – The entire metadata is in main memory
  – No demand paging of metadata

• Types of metadata
  – List of files
  – List of blocks for each file
  – List of DataNodes for each block
  – File attributes, e.g. creation time, replication factor

• A transaction log
  – Records file creations, file deletions, etc.

DataNode
• A block server
  – Stores data in the local file system (e.g. ext3)
  – Stores metadata of a block (e.g. CRC)
  – Serves data and metadata to clients

• Block report
  – Periodically sends a report of all existing blocks to the NameNode

• Facilitates pipelining of data
  – Forwards data to other specified DataNodes

Block Placement

• Current strategy
  – One replica on the local node
  – Second replica on a remote rack
  – Third replica on the same remote rack
  – Additional replicas are placed randomly

• Clients read from the nearest replica
• Would like to make this policy pluggable

Data Correctness

• Use checksums to validate data
  – Uses CRC32

• File creation
  – Client computes a checksum per 512 bytes
  – DataNode stores the checksum

• File access
  – Client retrieves the data and checksum from the DataNode
  – If validation fails, the client tries other replicas
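To make the per-512-byte checksum idea concrete, here is a small illustrative sketch (not HDFS's actual code path) that computes one CRC32 per 512-byte chunk using the standard Java library; the class and method names are hypothetical.

  import java.util.zip.CRC32;

  public class ChunkChecksums {
    // One CRC32 per chunk, mirroring the 512-byte granularity described on the slide.
    public static long[] crcPerChunk(byte[] data, int chunkSize) {
      int nChunks = (data.length + chunkSize - 1) / chunkSize;
      long[] crcs = new long[nChunks];
      for (int i = 0; i < nChunks; i++) {
        int start = i * chunkSize;
        int len = Math.min(chunkSize, data.length - start);
        CRC32 crc = new CRC32();
        crc.update(data, start, len);
        crcs[i] = crc.getValue();
      }
      return crcs;
    }

    public static void main(String[] args) {
      byte[] data = new byte[1300];           // example payload
      long[] crcs = crcPerChunk(data, 512);   // 3 chunks -> 3 checksums
      System.out.println("checksums: " + crcs.length);
    }
  }

On a read, the checksum of each retrieved chunk is recomputed and compared against the stored value; a mismatch sends the client to another replica, as the slide describes.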

NameNode Failure

• A single point of failure
• The transaction log is stored in multiple directories
  – A directory on the local file system
  – A directory on a remote file system (NFS/CIFS)

Data Pipelining

• Client retrieves a list of DataNodes on which to place replicas of a block
• Client writes the block to the first DataNode
• The first DataNode forwards the data to the next DataNode in the pipeline
• When all replicas are written, the client moves on to write the next block in the file

Rebalancer

• Goal: % disk full on DataNodes should be similar
  – Usually run when new DataNodes are added
  – The cluster stays online while the Rebalancer is active
  – The Rebalancer is throttled to avoid network congestion

Hadoop at Facebook
• Production cluster
  – 4800 cores, 600 machines, 16 GB per machine – April 2009
  – 8000 cores, 1000 machines, 32 GB per machine – July 2009
  – 4 SATA disks of 1 TB each per machine
  – 2-level network hierarchy, 40 machines per rack
  – Total cluster size is 2 PB, projected to be 12 PB in Q3 2009

• Test cluster
  – 800 cores, 16 GB each

Summary
• MapReduce, a software framework for distributed computing
• Hadoop, an open-source framework for distributed computing

Thank you!
