Getting started with HDFS client side mount table
With HDFS federation it's possible to have multiple NameNodes in an HDFS cluster. While this is good from a NameNode scalability and isolation perspective, it's difficult to manage multiple namespaces from a client application perspective. The HDFS client-side mount table makes the multiple namespaces transparent to the client. The ViewFs guide has more details on how to use the HDFS client mount table.
An earlier blog entry detailed how to set up HDFS federation. Let's assume the two NameNodes have been set up successfully on namenode1 and namenode2.

Let's map

- `/NN1Home` to `hdfs://namenode1:9001/home`
- `/NN2Home` to `hdfs://namenode2:9001/home`

Add the following to the core-site.xml:

```xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/praveensripati/Installations/hadoop-0.23.0/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>viewfs:///</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.default.link./NN1Home</name>
    <value>hdfs://namenode1:9001/home</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.default.link./NN2Home</name>
    <value>hdfs://namenode2:9001/home</value>
  </property>
</configuration>
```

Start the cluster with the `sbin/start-dfs.sh` command from the Hadoop home directory and make sure the NameNodes and the DataNodes are working properly.

Run the following commands:

```shell
bin/hadoop fs -put somefile.txt /NN1Home/input
bin/hadoop fs -put somefile.txt /NN2Home/output
```

Make sure that somefile.txt is in the hdfs://namenode1:9001/home/input folder from the NameNode web console.

Make sure that somefile.txt is in the hdfs://namenode2:9001/home/output folder from the NameNode web console.
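As the number of mount points grows, hand-editing the `fs.viewfs.mounttable.*` properties gets error-prone. Below is a small helper sketch (my own convenience script, not part of Hadoop) that renders the mount-table `<property>` entries for core-site.xml from a plain dict of mappings:

```python
# Sketch of a helper (not part of Hadoop) that generates the
# fs.viewfs.mounttable.* <property> entries for core-site.xml
# from a dict of mount point -> target URI.

from xml.sax.saxutils import escape

def mount_table_xml(mounts, table="default"):
    """Render core-site.xml <property> entries for each mount point."""
    parts = []
    for link, target in sorted(mounts.items()):
        name = f"fs.viewfs.mounttable.{table}.link.{link}"
        parts.append(
            "  <property>\n"
            f"    <name>{escape(name)}</name>\n"
            f"    <value>{escape(target)}</value>\n"
            "  </property>"
        )
    return "\n".join(parts)

print(mount_table_xml({
    "/NN1Home": "hdfs://namenode1:9001/home",
    "/NN2Home": "hdfs://namenode2:9001/home",
}))
```

The generated `<property>` blocks can be pasted into the `<configuration>` section shown earlier; `default` is the mount table name used when `fs.default.name` is `viewfs:///`.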