Configuring Java access to the Hadoop Distributed File System (HDFS)
Configuration files
Note: in the configuration files below, replace m103 with the address of your own HDFS service.
To read and write files on HDFS from a Java client, the configuration file that really matters is hadoop-0.20.2/conf/core-site.xml. This is where I got burned at first: my client simply could not connect to HDFS, and files could be neither created nor read.
<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration> <!--- global properties --> <property> <name>hadoop.tmp.dir</name> <value>/home/zhangzk/hadoop</value> <description>A base for other temporary directories.</description> </property> <!-- file system properties --> <property> <name>fs.default.name</name> <value>hdfs://linux-zzk-113:9000</value> </property> </configuration>
Property hadoop.tmp.dir: the base directory for Hadoop's local storage. On the NameNode it determines where the filesystem metadata is kept; on a DataNode it determines where that node stores its block data (the default metadata and data directories are derived from it).
Property fs.default.name: the NameNode's address (hostname or IP) and port; the default value is file:///. A Java API client must use exactly this URL to connect to HDFS, and the DataNodes also use it to reach the NameNode.
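As a quick check that a client can actually reach the NameNode through that URL, a minimal sketch along the following lines can be used. The URI hdfs://linux-zzk-113:9000 and the probe path /user/zhangzk are taken from the configuration above; substitute your own values.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectionCheck {
    public static void main(String[] args) throws Exception {
        // The URI must match fs.default.name in core-site.xml, otherwise the client cannot connect.
        URI nameNodeUri = URI.create("hdfs://linux-zzk-113:9000");
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(nameNodeUri, conf);
        // Probe an HDFS path to confirm that the connection actually works.
        System.out.println("/user/zhangzk exists: " + fs.exists(new Path("/user/zhangzk")));
        fs.close();
    }
}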
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?> <!--Autogenerated by Cloudera Manager--> <configuration> <property> <name>dfs.namenode.name.dir</name> <value>file:///mnt/sdc1/dfs/nn</value> </property> <property> <name>dfs.namenode.servicerpc-address</name> <value>m103:8022</value> </property> <property> <name>dfs.https.address</name> <value>m103:50470</value> </property> <property> <name>dfs.https.port</name> <value>50470</value> </property> <property> <name>dfs.namenode.http-address</name> <value>m103:50070</value> </property> <property> <name>dfs.replication</name> <value>3</value> </property> <property> <name>dfs.blocksize</name> <value>134217728</value> </property> <property> <name>dfs.client.use.datanode.hostname</name> <value>false</value> </property> <property> <name>fs.permissions.umask-mode</name> <value>022</value> </property> <property> <name>dfs.namenode.acls.enabled</name> <value>false</value> </property> <property> <name>dfs.block.local-path-access.user</name> <value>cloudera-scm</value> </property> <property> <name>dfs.client.read.shortcircuit</name> <value>false</value> </property> <property> <name>dfs.domain.socket.path</name> <value>/var/run/hdfs-sockets/dn</value> </property> <property> <name>dfs.client.read.shortcircuit.skip.checksum</name> <value>false</value> </property> <property> <name>dfs.client.domain.socket.data.traffic</name> <value>false</value> </property> <property> <name>dfs.datanode.hdfs-blocks-metadata.enabled</name> <value>true</value> </property> <property> <name>fs.http.impl</name> <value>com.scistor.datavision.fs.HTTPFileSystem</value> </property> </configuration>
mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?> <!--Autogenerated by Cloudera Manager--> <configuration> <property> <name>mapreduce.job.split.metainfo.maxsize</name> <value>10000000</value> </property> <property> <name>mapreduce.job.counters.max</name> <value>120</value> </property> <property> <name>mapreduce.output.fileoutputformat.compress</name> <value>true</value> </property> <property> <name>mapreduce.output.fileoutputformat.compress.type</name> <value>BLOCK</value> </property> <property> <name>mapreduce.output.fileoutputformat.compress.codec</name> <value>org.apache.hadoop.io.compress.SnappyCodec</value> </property> <property> <name>mapreduce.map.output.compress.codec</name> <value>org.apache.hadoop.io.compress.SnappyCodec</value> </property> <property> <name>mapreduce.map.output.compress</name> <value>true</value> </property> <property> <name>zlib.compress.level</name> <value>DEFAULT_COMPRESSION</value> </property> <property> <name>mapreduce.task.io.sort.factor</name> <value>64</value> </property> <property> <name>mapreduce.map.sort.spill.percent</name> <value>0.8</value> </property> <property> <name>mapreduce.reduce.shuffle.parallelcopies</name> <value>10</value> </property> <property> <name>mapreduce.task.timeout</name> <value>600000</value> </property> <property> <name>mapreduce.client.submit.file.replication</name> <value>1</value> </property> <property> <name>mapreduce.job.reduces</name> <value>24</value> </property> <property> <name>mapreduce.task.io.sort.mb</name> <value>256</value> </property> <property> <name>mapreduce.map.speculative</name> <value>false</value> </property> <property> <name>mapreduce.reduce.speculative</name> <value>false</value> </property> <property> <name>mapreduce.job.reduce.slowstart.completedmaps</name> <value>0.8</value> </property> <property> <name>mapreduce.jobhistory.address</name> <value>m103:10020</value> </property> <property> <name>mapreduce.jobhistory.webapp.address</name> <value>m103:19888</value> </property> <property> <name>mapreduce.jobhistory.webapp.https.address</name> <value>m103:19890</value> </property> <property> <name>mapreduce.jobhistory.admin.address</name> <value>m103:10033</value> </property> <property> <name>mapreduce.framework.name</name> <value>yarn</value> </property> <property> <name>yarn.app.mapreduce.am.staging-dir</name> <value>/user</value> </property> <property> <name>mapreduce.am.max-attempts</name> <value>2</value> </property> <property> <name>yarn.app.mapreduce.am.resource.mb</name> <value>2048</value> </property> <property> <name>yarn.app.mapreduce.am.resource.cpu-vcores</name> <value>1</value> </property> <property> <name>mapreduce.job.ubertask.enable</name> <value>false</value> </property> <property> <name>yarn.app.mapreduce.am.command-opts</name> <value>-Djava.net.preferIPv4Stack=true -Xmx1717986918</value> </property> <property> <name>mapreduce.map.java.opts</name> <value>-Djava.net.preferIPv4Stack=true -Xmx1717986918</value> </property> <property> <name>mapreduce.reduce.java.opts</name> <value>-Djava.net.preferIPv4Stack=true -Xmx2576980378</value> </property> <property> <name>yarn.app.mapreduce.am.admin.user.env</name> <value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH</value> </property> <property> <name>mapreduce.map.memory.mb</name> <value>2048</value> </property> <property> <name>mapreduce.map.cpu.vcores</name> <value>1</value> </property> <property> <name>mapreduce.reduce.memory.mb</name> <value>3072</value> </property> <property> <name>mapreduce.reduce.cpu.vcores</name> <value>1</value> 
</property> <property> <name>mapreduce.application.classpath</name> <value>$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$MR2_CLASSPATH,$CDH_HCAT_HOME/share/hcatalog/*,$CDH_HIVE_HOME/lib/*,/etc/hive/conf,/opt/cloudera/parcels/CDH/lib/udps/*</value> </property> <property> <name>mapreduce.admin.user.env</name> <value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH</value> </property> <property> <name>mapreduce.shuffle.max.connections</name> <value>80</value> </property> </configuration>
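If the Java client should pick up these cluster settings rather than hard-coding the NameNode URI, the XML files above can be loaded into a Configuration explicitly. A minimal sketch, assuming copies of the files live under /etc/hadoop/conf on the client machine (that path is an assumption; point it at wherever you keep them):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ClusterConfigClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Load the cluster configuration files shown above; the client then
        // connects to whatever filesystem fs.default.name points at.
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Connected to: " + fs.getUri());
        fs.close();
    }
}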
Using the Java API to access files and directories on HDFS
package com.demo.hdfs;
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;
/**
* @author zhangzk
*
*/
public class FileCopyToHdfs {
public static void main(String[] args) throws Exception {
try {
//uploadToHdfs();
//deleteFromHdfs();
//getDirectoryFromHdfs();
appendToHdfs();
readFromHdfs();
System.out.println("SUCCESS");
} catch (Exception e) {
e.printStackTrace();
}
}
/** Upload a local file to HDFS. */
private static void uploadToHdfs() throws FileNotFoundException,IOException {
String localSrc = "d://qq.txt";
String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq.txt";
InputStream in = new BufferedInputStream(new FileInputStream(localSrc));
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(dst), conf);
OutputStream out = fs.create(new Path(dst), new Progressable() {
public void progress() {
System.out.print(".");
}
});
IOUtils.copyBytes(in, out, 4096, true);
}
/** Read a file from HDFS and save it to the local filesystem. */
private static void readFromHdfs() throws FileNotFoundException,IOException {
String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq.txt";
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(dst), conf);
FSDataInputStream hdfsInStream = fs.open(new Path(dst));
OutputStream out = new FileOutputStream("d:/qq-hdfs.txt");
byte[] ioBuffer = new byte[1024];
int readLen = hdfsInStream.read(ioBuffer);
while(-1 != readLen){
out.write(ioBuffer, 0, readLen);
readLen = hdfsInStream.read(ioBuffer);
}
out.close();
hdfsInStream.close();
fs.close();
}
/** Append content to the end of an existing file on HDFS. Note: append must be enabled by adding <property><name>dfs.support.append</name><value>true</value></property> to hdfs-site.xml. */
private static void appendToHdfs() throws FileNotFoundException,IOException {
String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq.txt";
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(dst), conf);
FSDataOutputStream out = fs.append(new Path(dst));
byte[] data = "zhangzk add by hdfs java api".getBytes();
out.write(data, 0, data.length);
out.close();
fs.close();
}
/** Delete a file from HDFS. */
private static void deleteFromHdfs() throws FileNotFoundException,IOException {
String dst = "hdfs://192.168.0.113:9000/user/zhangzk/qq-bak.txt";
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(dst), conf);
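// deleteOnExit only marks the path; it is actually removed when the FileSystem is
// closed (fs.close() below). Use fs.delete(new Path(dst), false) to remove it immediately.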
fs.deleteOnExit(new Path(dst));
fs.close();
}
/** List the files and directories under an HDFS path. */
private static void getDirectoryFromHdfs() throws FileNotFoundException,IOException {
String dst = "hdfs://192.168.0.113:9000/user/zhangzk";
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(dst), conf);
FileStatus fileList[] = fs.listStatus(new Path(dst));
int size = fileList.length;
for(int i = 0; i < size; i++){
System.out.println("name:" + fileList[i].getPath().getName() + "/t/tsize:" + fileList[i].getLen());
}
fs.close();
}
}