Remote debugging Hadoop 2.6.0 from Eclipse / IntelliJ IDEA
Many Hadoop beginners are probably in the same boat as me: without enough machine resources, the only option is a pseudo-distributed Hadoop installation on Linux inside a virtual machine, writing and testing code with Eclipse or IntelliJ IDEA on the Windows 7 host. Which raises the question: how do you get Eclipse or IntelliJ IDEA on Windows 7 to submit map/reduce jobs to the remote Hadoop and debug them with breakpoints?
1. Preparation
1.1 On Windows 7, pick a directory and unpack hadoop-2.6.0 into it; in this article that is D:\yangjm\Code\study\hadoop\hadoop-2.6.0 (written as $HADOOP_HOME below).
1.2 Add a few environment variables on Windows 7:
HADOOP_HOME=D:\yangjm\Code\study\hadoop\hadoop-2.6.0
HADOOP_BIN_PATH=%HADOOP_HOME%\bin
HADOOP_PREFIX=D:\yangjm\Code\study\hadoop\hadoop-2.6.0
In addition, append ;%HADOOP_HOME%\bin to the end of the PATH variable.
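To confirm these variables are actually visible to the JVM (IDEs typically have to be restarted before they pick up new environment variables), a quick throwaway check such as the following sketch can help; it only reads the names defined above:

// Throwaway sanity check: print the Hadoop-related environment variables as this JVM sees them.
public class EnvCheck {
    public static void main(String[] args) {
        System.out.println("HADOOP_HOME     = " + System.getenv("HADOOP_HOME"));
        System.out.println("HADOOP_BIN_PATH = " + System.getenv("HADOOP_BIN_PATH"));
        System.out.println("HADOOP_PREFIX   = " + System.getenv("HADOOP_PREFIX"));
        String path = System.getenv("PATH");
        System.out.println("PATH contains hadoop bin: " + (path != null && path.contains("hadoop-2.6.0\\bin")));
    }
}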
2. Remote debugging with Eclipse
2.1 Download the hadoop-eclipse-plugin
hadoop-eclipse-plugin is a Hadoop plugin built specifically for Eclipse; it lets you browse HDFS directories and file contents right inside the IDE. Its source code is hosted on GitHub: https://github.com/winghc/hadoop2x-eclipse-plugin
If you are interested you can download the source and build it yourself (a web search turns up plenty of articles), but if you just want to use it, https://github.com/winghc/hadoop2x-eclipse-plugin/tree/master/release already provides prebuilt jars for various versions. Copy the downloaded hadoop-eclipse-plugin-2.6.0.jar into the eclipse/plugins directory, restart Eclipse, and you are done.
2.2 Download the 64-bit Windows helper binaries for Hadoop 2.6 (hadoop.dll, winutils.exe)
Under hadoop-common-project\hadoop-common\src\main\winutils in the Hadoop 2.6.0 source tree there is a Visual Studio project; building it produces a pile of files, of which hadoop.dll and winutils.exe are the two useful ones. Copy winutils.exe into $HADOOP_HOME\bin and hadoop.dll into %windir%\system32 (this mostly keeps the plugin from throwing assorted obscure errors, null reference exceptions and the like).
Note: if you would rather not build them yourself, you can download the prebuilt archive hadoop2.6(x64)V0.2.rar directly.
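In addition (a workaround of my own, not part of the original steps): if jobs launched from the IDE still fail with the classic complaint that HADOOP_HOME / hadoop.home.dir is unset and winutils.exe cannot be found, Hadoop's Shell utilities also honor the hadoop.home.dir system property, so a line like this at the very top of main(), before any Hadoop class is touched, points them at the unpacked distribution:

// Workaround sketch: tell Hadoop's Shell utilities where the Windows distribution
// (and thus bin\winutils.exe) lives. Must run before any Hadoop class initializes.
System.setProperty("hadoop.home.dir", "D:\\yangjm\\Code\\study\\hadoop\\hadoop-2.6.0");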
2.3 Configure the hadoop-eclipse-plugin
Start Eclipse, then Window -> Show View -> Other

Window -> Preferences -> Hadoop Map/Reduce: point this at the Hadoop root directory on Windows 7 (i.e. $HADOOP_HOME)

Then, in the Map/Reduce Locations panel, click the little elephant icon

to add a Location

This dialog matters a great deal, so here is what the parameters mean:
Location name: just a label; pick anything.
Map/Reduce (V2) Master Host: the IP address of the Hadoop master in the VM; the port beneath it corresponds to the port specified by the dfs.datanode.ipc.address property in hdfs-site.xml.
DFS Master Port: this port corresponds to the port specified by fs.defaultFS in core-site.xml.
Finally, User name must match the user that runs Hadoop inside the VM. I installed and run Hadoop 2.6.0 as the user hadoop, so I enter hadoop here; if you installed it as root, change it to root accordingly.
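For reference, the matching entries in the VM's configuration look roughly like this (a sketch: 9000 is the port this article uses for fs.defaultFS, and 0.0.0.0:50020 is the Hadoop 2.x default for dfs.datanode.ipc.address; check your own files):

<!-- core-site.xml: the port in fs.defaultFS is the DFS Master port -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://172.28.20.xxx:9000</value>
</property>

<!-- hdfs-site.xml: the port in dfs.datanode.ipc.address goes in the Map/Reduce (V2) Master box -->
<property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50020</value>
</property>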
Once these parameters are filled in, click Finish and Eclipse knows how to connect to Hadoop. If everything goes smoothly, the Project Explorer panel will show the directories and files in HDFS.
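If nothing shows up, it can help to take the plugin out of the equation and test the connection directly with the plain HDFS client API. A minimal sketch (the class name is mine; swap the IP for your VM's, matching your fs.defaultFS):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSmokeTest {
    public static void main(String[] args) throws Exception {
        // Connect straight to the NameNode; 9000 matches the fs.defaultFS port used in this article.
        FileSystem fs = FileSystem.get(URI.create("hdfs://172.28.20.xxx:9000/"), new Configuration());
        // List the HDFS root; if paths print, host, port and connectivity are all fine.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}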

Right-click a file and try deleting it. The first attempt usually fails with a wall of messages that boil down to insufficient permissions, because the current Windows 7 login user is not the user that runs Hadoop in the VM. There are several fixes: you could, for instance, create a Windows 7 administrator account named hadoop, log in as that user, and do your Eclipse work there, but that is far too much hassle. The simplest approach:
Add the following to hdfs-site.xml (note: on Hadoop 2.x the current name of this setting is dfs.permissions.enabled; the old dfs.permissions name still works as a deprecated alias):
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
Then, inside the VM, run hadoop dfsadmin -safemode leave (on Hadoop 2.x, hdfs dfsadmin -safemode leave is the non-deprecated spelling).
To be safe, follow up with hadoop fs -chmod 777 /
In short, this switches Hadoop's permission checking off entirely (fine while you are learning; never do this on a real production cluster). Finally, restart Hadoop, go back to Eclipse, and repeat the file deletion from before; it should now succeed.
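A gentler alternative (my own suggestion, not part of the original workaround) is to leave permissions on and instead have the Windows client identify itself as the hadoop user: Hadoop's client-side user resolution honors the HADOOP_USER_NAME environment variable / system property, so one line before the first FileSystem or Job is created does it:

// Sketch: present this JVM to HDFS as the "hadoop" user instead of the Windows login user.
// Must be set before the first FileSystem/Job object is created.
System.setProperty("HADOOP_USER_NAME", "hadoop");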
2.4 Create the WordCount sample project
Create a new project and choose Map/Reduce Project

Click Next through the remaining steps, then drop in a WordCount.java with the following code:
package yjmyzz;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        for (int i = 0; i < otherArgs.length - 1; ++i) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job,
                new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Then add a log4j.properties as well, with the content below (so you can watch the various log output once things are running):
log4j.rootLogger=INFO, stdout
#log4j.logger.org.springframework=INFO
#log4j.logger.org.apache.activemq=INFO
#log4j.logger.org.apache.activemq.spring=WARN
#log4j.logger.org.apache.activemq.store.journal=INFO
#log4j.logger.org.activeio.journal=INFO
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} | %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
The final directory structure looks like this:

Now you can hit Run. It will of course fail, because WordCount has been given no input arguments; see the screenshot below.
2.5 Set the run arguments

WordCount takes a file as input, counts the words in it, and writes the result to another directory, so it needs two arguments. As in the screenshot above, enter in Program arguments:
hdfs://172.28.20.xxx:9000/jimmy/input/README.txt
hdfs://172.28.20.xxx:9000/jimmy/output/
Adapt these to your own setup (mainly, swap the IP for your VM's IP). Note that if input/README.txt does not exist yet you must upload it manually first (see the commands below), and /output/ must not already exist, otherwise the job will fail at the very end when it finds the target directory is already there. With that done, set a breakpoint somewhere suitable, and you can finally debug.
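For reference, if README.txt is not in HDFS yet, commands along these lines inside the VM create the input directory and upload the file (paths follow this article's example; the Hadoop distribution ships a README.txt in its root directory):

hadoop fs -mkdir -p /jimmy/input
hadoop fs -put $HADOOP_HOME/README.txt /jimmy/input/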

3. Remote debugging Hadoop with IntelliJ IDEA
3.1 Create a Maven WordCount project
The pom file is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>yjmyzz</groupId>
    <artifactId>mapreduce-helloworld</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>commons-cli</groupId>
            <artifactId>commons-cli</artifactId>
            <version>1.2</version>
        </dependency>
    </dependencies>

    <build>
        <finalName>${project.artifactId}</finalName>
    </build>
</project>
The project structure looks like this:

Right-click the project -> Open Module Settings (or press F12) to open the module properties

Add a dependency Library reference

then pull in all the relevant jars from under $HADOOP_HOME

The imported library can be given a name, e.g. hadoop2.6

3.2 Set the run arguments

Note two things:
1. Program arguments: same approach as in Eclipse, specify the input file and the output directory here.
2. Working Directory: set the working directory to the $HADOOP_HOME directory.
Then you can debug.

The one annoyance with IntelliJ is that, lacking an Eclipse-style Hadoop plugin, every time a WordCount run finishes you have to delete the output directory from the command line before you can run again. To get around this, WordCount can be improved to delete the output directory itself before the job starts; see the code below:
package yjmyzz;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    /**
     * Delete the given directory from HDFS (recursively).
     *
     * @param conf
     * @param dirPath
     * @throws IOException
     */
    private static void deleteDir(Configuration conf, String dirPath) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        Path targetPath = new Path(dirPath);
        if (fs.exists(targetPath)) {
            boolean delResult = fs.delete(targetPath, true);
            if (delResult) {
                System.out.println(targetPath + " has been deleted successfully.");
            } else {
                System.out.println(targetPath + " deletion failed.");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }
        // delete the output directory first
        deleteDir(conf, otherArgs[otherArgs.length - 1]);
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        for (int i = 0; i < otherArgs.length - 1; ++i) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job,
                new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
That alone is not quite enough, though: when running inside the IDE, the IDE has to know which HDFS instance to connect to (much as DB development needs a DataSource specified in an XML config). Copy core-site.xml from $HADOOP_HOME\etc\hadoop into the resources directory, like this:

Its content is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.28.20.***:9000</value>
    </property>
</configuration>
Just replace the IP above with your own VM's IP.
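Alternatively (a variation of my own, not from the original article), you can skip copying core-site.xml and set the address directly in code; Configuration.set is the standard API for this. In WordCount's main(), right after the Configuration is created:

Configuration conf = new Configuration();
// Inline equivalent of the core-site.xml entry above; replace with your VM's IP.
conf.set("fs.defaultFS", "hdfs://172.28.20.xxx:9000");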