Large-Scale Data Processing / Cloud Computing
Lecture 3 – Hadoop Environment

Peng Bo (彭波)
School of Electronics Engineering and Computer Science, Peking University
4/23/2011
http://net.pku.edu.cn/~course/cs402/

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.

Jimmy Lin, University of Maryland
Course developed by the SEWM Group
Install Hadoop
Installation Prerequisites
  Java SDK 6
  Linux OS (the only supported production platform)

Installation
  Download hadoop-x.y.z.tar.gz (we use version 0.20.2)
  tar xzvf hadoop-x.y.z.tar.gz

Setup Environment Variables
  export HADOOP_HOME=/home/hadoop
  export HADOOP_CONF_DIR=/home/hadoop/hadoop-config
  export PATH=$HADOOP_HOME/bin:$PATH
  export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
Configuration
Directory: $HADOOP_CONF_DIR
  core-site.xml
  hdfs-site.xml
  mapred-site.xml
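For illustration, a minimal pseudo-distributed configuration for Hadoop 0.20 might look like the sketch below; localhost and the port numbers are conventional assumptions, not requirements.

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>   <!-- URI of the default file system (the namenode) -->
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>   <!-- one copy per block suffices on a single node -->
    <value>1</value>
  </property>
</configuration>

mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>   <!-- host:port of the jobtracker -->
    <value>localhost:9001</value>
  </property>
</configuration>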
Test
HDFS:
  hadoop fs -ls /
MapReduce:
  hadoop jar $HADOOP_HOME/hadoop-0.20.2-examples.jar pi 10 10000
Web Administration http://jobtracker:50030
Development Environment
Eclipse 3.3.2 works well with the plugin distributed with Hadoop 0.20.2; it is best not to try a newer Eclipse version.

Online Knowledge Base
  A summary of common errors
  "Hadoop 0.20 Development – Eclipse Plugin + Makefile" (《hadoop 0.20 程式開發 – eclipse plugin + Makefile》)
“Hello world”
“Hello World”: Word Count

Map(String docid, String text):
  for each word w in text:
    Emit(w, 1);

Reduce(String term, Iterator<Int> values):
  int sum = 0;
  for each v in values:
    sum += v;
  Emit(term, sum);
Basic Hadoop API*
Mapper
  void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter)
  void configure(JobConf job)
  void close() throws IOException
Reducer/Combiner
  void reduce(K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter)
  void configure(JobConf job)
  void close() throws IOException
Partitioner
  int getPartition(K2 key, V2 value, int numPartitions)
*Note: forthcoming API changes…
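To make the Partitioner contract concrete, here is a sketch of a hash-based partitioner in this old API; the class name is ours, but the body mirrors the usual hash scheme:

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Sketch: assign each key to a reduce partition by hashing it.
public class MyHashPartitioner<K2, V2> implements Partitioner<K2, V2> {
  public void configure(JobConf job) {}   // no configuration needed

  public int getPartition(K2 key, V2 value, int numPartitions) {
    // Mask the sign bit so the partition index is always non-negative.
    return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}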
Data Types in Hadoop
Writable: defines a de/serialization protocol. Every data type in Hadoop is a Writable.
WritableComparable: defines a sort order. All keys must be of this type (but not values).
IntWritable, LongWritable, Text, …: concrete classes for different data types.
SequenceFile: a binary encoding of a sequence of key/value pairs.
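To see the protocol in action, here is a sketch of a custom key type; IntPair is hypothetical, not a Hadoop class:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

// Hypothetical pair-of-ints key illustrating the Writable protocol.
public class IntPair implements WritableComparable<IntPair> {
  private int first, second;

  public void write(DataOutput out) throws IOException {
    out.writeInt(first);    // serialization: field order defines the wire format
    out.writeInt(second);
  }

  public void readFields(DataInput in) throws IOException {
    first = in.readInt();   // deserialization must mirror write()
    second = in.readInt();
  }

  // The sort order required of keys: by first, then by second.
  public int compareTo(IntPair other) {
    if (first != other.first) return first < other.first ? -1 : 1;
    if (second != other.second) return second < other.second ? -1 : 1;
    return 0;
  }
}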
WordCount::Mapper
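A sketch of the mapper, using the old org.apache.hadoop.mapred API listed above (the class name is ours):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Emits (word, 1) for every token in the input line.
public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      output.collect(word, ONE);   // reuse Writable objects to limit allocation
    }
  }
}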
WordCount::Reducer
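And a matching sketch of the reducer:

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Sums the 1s for each word and emits (word, total count).
public class WordCountReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {
  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}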
Basic Cluster Components
One of each:
  Namenode (NN)
  Jobtracker (JT)
Set of each per slave machine:
  Tasktracker (TT)
  Datanode (DN)
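On a 0.20 install, these daemons can be brought up with the stock scripts and checked with jps (a sketch; assumes $HADOOP_HOME/bin is on your PATH, as set earlier):

start-dfs.sh      # starts the namenode, secondary namenode, and datanode(s)
start-mapred.sh   # starts the jobtracker and tasktracker(s)
jps               # list running Java daemons to verify they all came up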
Putting everything together…
[Diagram: a job submission node runs the jobtracker and a master node runs the namenode daemon; each slave node runs a tasktracker and a datanode daemon on top of the local Linux file system.]
Anatomy of a Job
MapReduce program in Hadoop = Hadoop job
  Jobs are divided into map and reduce tasks
  An instance of a running task is called a task attempt
  Multiple jobs can be composed into a workflow
Job submission process (a driver sketch follows):
  Client (i.e., driver program) creates a job, configures it, and submits it to the jobtracker
  JobClient computes input splits (on the client end)
  Job data (jar, configuration XML) are sent to the JobTracker
  JobTracker puts job data in a shared location and enqueues tasks
  TaskTrackers poll for tasks
  Off to the races…
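A sketch of such a driver for the word count example; the class names match the mapper and reducer sketches above, and the input/output paths come from the command line:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCountDriver.class);
    conf.setJobName("wordcount");

    conf.setMapperClass(WordCountMapper.class);
    conf.setCombinerClass(WordCountReducer.class);  // reducer doubles as combiner
    conf.setReducerClass(WordCountReducer.class);

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);  // computes splits, ships job data, blocks until completion
  }
}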
[Diagram: the InputFormat splits each input file into InputSplits; one RecordReader per split feeds records to a Mapper, which produces intermediates. Source: redrawn from a slide by Cloudera, cc-licensed.]
[Diagram: each Mapper's intermediates pass through a Partitioner, which assigns them to Reducers (combiners omitted here). Source: redrawn from a slide by Cloudera, cc-licensed.]
[Diagram: each Reducer writes its output through a RecordWriter provided by the OutputFormat, producing one output file per reducer.]
Input and Output
InputFormat:
  TextInputFormat
  KeyValueTextInputFormat
  SequenceFileInputFormat
  …
OutputFormat:
  TextOutputFormat
  SequenceFileOutputFormat
  …
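Formats are selected on the JobConf; for example, to read and write SequenceFiles (a sketch, building on the driver above):

// On the JobConf from the driver sketch:
conf.setInputFormat(org.apache.hadoop.mapred.SequenceFileInputFormat.class);
conf.setOutputFormat(org.apache.hadoop.mapred.SequenceFileOutputFormat.class);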
Shuffle and Sort in Hadoop
Probably the most complex aspect of MapReduce!
Map side
  Map outputs are buffered in memory in a circular buffer
  When the buffer reaches a threshold, its contents are “spilled” to disk
  Spills are merged into a single, partitioned file (sorted within each partition); the combiner runs here
Reduce side
  First, map outputs are copied over to the reducer machine
  “Sort” is a multi-pass merge of map outputs (happens in memory and on disk); the combiner runs here too
  The final merge pass goes directly into the reducer
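The map-side buffer behavior is tunable. In the 0.20-era configuration the relevant knobs are the ones below; the defaults shown are recalled from the stock configuration and worth verifying against your distribution:

io.sort.mb             # size of the in-memory circular buffer, in MB (default 100)
io.sort.spill.percent  # buffer fill fraction that triggers a spill (default 0.80)
io.sort.factor         # number of streams merged at once in the multi-pass merge (default 10)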
[Diagram: Shuffle and Sort. A mapper's output fills a circular buffer in memory and is spilled to disk, with the combiner applied; the spills are merged into a single partitioned intermediate file on disk. Each reducer copies its partition from this mapper and the other mappers, merges the copies, and feeds the result into reduce.]
Hadoop Workflow
[Diagram: you and the Hadoop cluster]
1. Load data into HDFS
2. Develop code locally
3. Submit MapReduce job (3a. Go back to Step 2)
4. Retrieve data from HDFS
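In commands, one pass through the workflow looks roughly like this; the paths and jar name are hypothetical:

hadoop fs -put local-input/ /user/you/input      # 1. load data into HDFS
hadoop jar wordcount.jar WordCountDriver \
    /user/you/input /user/you/output             # 3. submit the MapReduce job
hadoop fs -get /user/you/output local-output/    # 4. retrieve results from HDFS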
Debugging Hadoop
First, take a deep breath
Start from a unit test, then run locally
Strategies
  Learn to use the webapp
  Where does println go? Don't use println; use logging (see the sketch below)
  Throw RuntimeExceptions
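Hadoop itself logs through Apache Commons Logging, so task code can do the same; messages then land in the per-task logs viewable from the jobtracker webapp. A minimal sketch (the class and method are hypothetical):

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class MyMapper {
  private static final Log LOG = LogFactory.getLog(MyMapper.class);

  // Called from map(); output goes to the task's log, not your console.
  void logRecord(String record) {
    LOG.info("processing record: " + record);
  }
}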
Recap
  Hadoop data types
  Anatomy of a Hadoop job
  Hadoop jobs, end to end
  Software development workflow
Q&A? Thank you!