Tuesday, August 30, 2016

Apache Ant & Ivy installation


ANT - a build tool for Java applications
IVY - a dependency manager

Install Ant

nutch@45883500b170:~$ pwd  
/home/nutch
nutch@45883500b170:~$ tar xzvf /software/apache-ivy-2.4.0-bin-with-deps.tar.gz 
...
nutch@45883500b170:~$ tar xzvf /software/apache-ant-1.9.7-bin.tar.gz 
...
nutch@45883500b170:~$ ls -l
total 12
drwxr-xr-x 6 nutch nutch 4096 Apr  9 06:38 apache-ant-1.9.7
drwxr-xr-x 5 nutch nutch 4096 Aug 30 07:10 apache-ivy-2.4.0
drwxr-xr-x 7 nutch nutch 4096 Aug 30 05:48 apache-nutch-2.3.1
nutch@45883500b170:~/apache-ant-1.9.7$ tail ~/.bashrc
  if [ -f /usr/share/bash-completion/bash_completion ]; then
    . /usr/share/bash-completion/bash_completion
  elif [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
  fi
fi
JAVA_HOME=/usr/lib/jvm/java-8-oracle
ANT_HOME=/home/nutch/apache-ant-1.9.7
PATH=$ANT_HOME/bin:$PATH
export JAVA_HOME ANT_HOME PATH
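
A quick sanity check after sourcing the updated .bashrc (a sketch assuming the paths above; adjust if your JDK or Ant version differs):

```shell
# Re-apply the environment from ~/.bashrc in the current shell.
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export ANT_HOME="$HOME/apache-ant-1.9.7"
export PATH="$ANT_HOME/bin:$PATH"

# PATH should now resolve ant from ANT_HOME; if not, re-check ~/.bashrc.
case ":$PATH:" in
  *":$ANT_HOME/bin:"*) echo "ANT_HOME is on PATH" ;;
  *)                   echo "ANT_HOME is NOT on PATH" ;;
esac
command -v ant && ant -version || echo "ant not found - check ANT_HOME"
```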



nutch@45883500b170:/$ cd
nutch@45883500b170:~$ 
nutch@45883500b170:~$ cd apache-ant-1.9.7/
nutch@45883500b170:~/apache-ant-1.9.7$ 
nutch@45883500b170:~/apache-ant-1.9.7$ ant -f fetch.xml -Ddest=system
Buildfile: /home/nutch/apache-ant-1.9.7/fetch.xml

pick-dest:
     [echo] Downloading to /home/nutch/apache-ant-1.9.7/lib

probe-m2:

download-m2:
     [echo] Downloading to /home/nutch/apache-ant-1.9.7/lib
      [get] Getting: http://repo1.maven.org/maven2/org/apache/maven/maven-artifact-ant/2.0.4/maven-artifact-ant-2.0.4-dep.jar
      [get] To: /home/nutch/apache-ant-1.9.7/lib/maven-artifact-ant-2.0.4-dep.jar
      [get] ....................................................
      [get] ...................................

dont-validate-m2-checksum:

validate-m2-checksum:

checksum-mismatch:

checksum-match:

get-m2:

macros:

init:

logging:
[artifact:dependencies] Downloading: log4j/log4j/1.2.14/log4j-1.2.14.pom
[artifact:dependencies] Transferring 2K
[artifact:dependencies] Downloading: log4j/log4j/1.2.14/log4j-1.2.14.jar
[artifact:dependencies] Transferring 358K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib
[artifact:dependencies] Downloading: commons-logging/commons-logging-api/1.1/commons-logging-api-1.1.pom
[artifact:dependencies] Transferring 5K
[artifact:dependencies] Downloading: commons-logging/commons-logging-api/1.1/commons-logging-api-1.1.jar
[artifact:dependencies] Transferring 43K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib

junit:
[artifact:dependencies] Downloading: junit/junit/4.11/junit-4.11.pom
[artifact:dependencies] Transferring 2K
[artifact:dependencies] Downloading: org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: org/hamcrest/hamcrest-parent/1.3/hamcrest-parent-1.3.pom
[artifact:dependencies] Transferring 1K
[artifact:dependencies] Downloading: org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar
[artifact:dependencies] Transferring 43K
[artifact:dependencies] Downloading: junit/junit/4.11/junit-4.11.jar
[artifact:dependencies] Transferring 239K
     [copy] Copying 2 files to /home/nutch/apache-ant-1.9.7/lib

xml:
[artifact:dependencies] Downloading: xalan/xalan/2.7.1/xalan-2.7.1.pom
[artifact:dependencies] Transferring 1K
[artifact:dependencies] Downloading: org/apache/apache/4/apache-4.pom
[artifact:dependencies] Transferring 4K
[artifact:dependencies] Downloading: xalan/serializer/2.7.1/serializer-2.7.1.pom
[artifact:dependencies] Transferring 1K
[artifact:dependencies] Downloading: xml-apis/xml-apis/1.3.04/xml-apis-1.3.04.pom
[artifact:dependencies] Transferring 1K
[artifact:dependencies] Downloading: org/apache/apache/3/apache-3.pom
[artifact:dependencies] Transferring 3K
[artifact:dependencies] Downloading: xalan/serializer/2.7.1/serializer-2.7.1.jar
[artifact:dependencies] Transferring 271K
[artifact:dependencies] Downloading: xml-apis/xml-apis/1.3.04/xml-apis-1.3.04.jar
[artifact:dependencies] Transferring 189K
[artifact:dependencies] Downloading: xalan/xalan/2.7.1/xalan-2.7.1.jar
[artifact:dependencies] Transferring 3101K
     [copy] Copying 3 files to /home/nutch/apache-ant-1.9.7/lib
[artifact:dependencies] Downloading: xml-resolver/xml-resolver/1.2/xml-resolver-1.2.pom
[artifact:dependencies] Transferring 1K
[artifact:dependencies] Downloading: xml-resolver/xml-resolver/1.2/xml-resolver-1.2.jar
[artifact:dependencies] Transferring 82K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib

networking:
[artifact:dependencies] Downloading: commons-net/commons-net/1.4.1/commons-net-1.4.1.pom
[artifact:dependencies] Transferring 4K
[artifact:dependencies] Downloading: oro/oro/2.0.8/oro-2.0.8.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: oro/oro/2.0.8/oro-2.0.8.jar
[artifact:dependencies] Transferring 63K
[artifact:dependencies] Downloading: commons-net/commons-net/1.4.1/commons-net-1.4.1.jar
[artifact:dependencies] Transferring 176K
     [copy] Copying 2 files to /home/nutch/apache-ant-1.9.7/lib
[artifact:dependencies] Downloading: com/jcraft/jsch/0.1.50/jsch-0.1.50.pom
[artifact:dependencies] Transferring 3K
[artifact:dependencies] Downloading: org/sonatype/oss/oss-parent/6/oss-parent-6.pom
[artifact:dependencies] Transferring 4K
[artifact:dependencies] Downloading: com/jcraft/jsch/0.1.50/jsch-0.1.50.jar
[artifact:dependencies] Transferring 248K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib

regexp:
[artifact:dependencies] Downloading: regexp/regexp/1.3/regexp-1.3.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: regexp/regexp/1.3/regexp-1.3.jar
[artifact:dependencies] Transferring 24K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib

antlr:
[artifact:dependencies] Downloading: antlr/antlr/2.7.7/antlr-2.7.7.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: antlr/antlr/2.7.7/antlr-2.7.7.jar
[artifact:dependencies] Transferring 434K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib

bcel:
[artifact:dependencies] Downloading: bcel/bcel/5.1/bcel-5.1.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: regexp/regexp/1.2/regexp-1.2.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: regexp/regexp/1.2/regexp-1.2.jar
[artifact:dependencies] Transferring 29K
[artifact:dependencies] Downloading: bcel/bcel/5.1/bcel-5.1.jar
[artifact:dependencies] Transferring 503K
     [copy] Copying 2 files to /home/nutch/apache-ant-1.9.7/lib

jdepend:
[artifact:dependencies] Downloading: jdepend/jdepend/2.9.1/jdepend-2.9.1.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: jdepend/jdepend/2.9.1/jdepend-2.9.1.jar
[artifact:dependencies] Transferring 56K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib

bsf:
[artifact:dependencies] Downloading: bsf/bsf/2.4.0/bsf-2.4.0.pom
[artifact:dependencies] Transferring 1K
[artifact:dependencies] Downloading: commons-logging/commons-logging/1.0.4/commons-logging-1.0.4.pom
[artifact:dependencies] Transferring 5K
[artifact:dependencies] Downloading: bsf/bsf/2.4.0/bsf-2.4.0.jar
[artifact:dependencies] Transferring 110K
[artifact:dependencies] Downloading: commons-logging/commons-logging/1.0.4/commons-logging-1.0.4.jar
[artifact:dependencies] Transferring 37K
     [copy] Copying 2 files to /home/nutch/apache-ant-1.9.7/lib

debugging:
[artifact:dependencies] Downloading: which/which/1.0/which-1.0.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: which/which/1.0/which-1.0.jar
[artifact:dependencies] Transferring 16K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib

jruby:
[artifact:dependencies] Downloading: org/jruby/jruby/0.9.8/jruby-0.9.8.pom
[artifact:dependencies] Transferring 5K
[artifact:dependencies] Downloading: org/jruby/shared/0.9.8/shared-0.9.8.pom
[artifact:dependencies] Transferring 4K
[artifact:dependencies] Downloading: asm/asm-commons/2.2.3/asm-commons-2.2.3.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: asm/asm-parent/2.2.3/asm-parent-2.2.3.pom
[artifact:dependencies] Transferring 2K
[artifact:dependencies] Downloading: asm/asm-tree/2.2.3/asm-tree-2.2.3.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: asm/asm/2.2.3/asm-2.2.3.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: asm/asm/2.2.3/asm-2.2.3.jar
[artifact:dependencies] Transferring 34K
[artifact:dependencies] Downloading: asm/asm-tree/2.2.3/asm-tree-2.2.3.jar
[artifact:dependencies] Transferring 15K
[artifact:dependencies] Downloading: org/jruby/jruby/0.9.8/jruby-0.9.8.jar
[artifact:dependencies] Transferring 1644K
[artifact:dependencies] Downloading: asm/asm-commons/2.2.3/asm-commons-2.2.3.jar
[artifact:dependencies] Transferring 14K
     [copy] Copying 4 files to /home/nutch/apache-ant-1.9.7/lib

beanshell:
[artifact:dependencies] Downloading: org/beanshell/bsh/2.0b4/bsh-2.0b4.pom
[artifact:dependencies] Transferring 1K
[artifact:dependencies] Downloading: org/beanshell/beanshell/2.0b4/beanshell-2.0b4.pom
[artifact:dependencies] Transferring 1K
[artifact:dependencies] Downloading: org/beanshell/bsh/2.0b4/bsh-2.0b4.jar
[artifact:dependencies] Transferring 275K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib
[artifact:dependencies] Downloading: org/beanshell/bsh-core/2.0b4/bsh-core-2.0b4.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: org/beanshell/bsh-core/2.0b4/bsh-core-2.0b4.jar
[artifact:dependencies] Transferring 140K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib

rhino:
[artifact:dependencies] Downloading: rhino/js/1.6R7/js-1.6R7.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: rhino/js/1.6R7/js-1.6R7.jar
[artifact:dependencies] Transferring 794K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib

script:

javamail:
[artifact:dependencies] Downloading: javax/mail/mail/1.4/mail-1.4.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: javax/activation/activation/1.1/activation-1.1.pom
[artifact:dependencies] Transferring 1K
[artifact:dependencies] Downloading: javax/mail/mail/1.4/mail-1.4.jar
[artifact:dependencies] Transferring 379K
[artifact:dependencies] Downloading: javax/activation/activation/1.1/activation-1.1.jar
[artifact:dependencies] Transferring 61K
     [copy] Copying 2 files to /home/nutch/apache-ant-1.9.7/lib

jspc:
[artifact:dependencies] Downloading: tomcat/jasper-compiler/4.1.36/jasper-compiler-4.1.36.pom
[artifact:dependencies] [WARNING] Unable to get resource from repository remote (http://repo1.maven.org/maven2/)
[artifact:dependencies] Downloading: tomcat/jasper-compiler/4.1.36/jasper-compiler-4.1.36.pom
[artifact:dependencies] [WARNING] Unable to get resource from repository central (http://repo1.maven.org/maven2)
[artifact:dependencies] Downloading: tomcat/jasper-compiler/4.1.36/jasper-compiler-4.1.36.jar
[artifact:dependencies] Transferring 179K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib
[artifact:dependencies] Downloading: tomcat/jasper-runtime/4.1.36/jasper-runtime-4.1.36.pom
[artifact:dependencies] [WARNING] Unable to get resource from repository remote (http://repo1.maven.org/maven2/)
[artifact:dependencies] Downloading: tomcat/jasper-runtime/4.1.36/jasper-runtime-4.1.36.pom
[artifact:dependencies] [WARNING] Unable to get resource from repository central (http://repo1.maven.org/maven2)
[artifact:dependencies] Downloading: tomcat/jasper-runtime/4.1.36/jasper-runtime-4.1.36.jar
[artifact:dependencies] Transferring 70K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib
[artifact:dependencies] Downloading: javax/servlet/servlet-api/2.3/servlet-api-2.3.pom
[artifact:dependencies] Transferring 0K
[artifact:dependencies] Downloading: javax/servlet/servlet-api/2.3/servlet-api-2.3.jar
[artifact:dependencies] Transferring 76K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib

jai:
[artifact:dependencies] Downloading: javax/media/jai-core/1.1.3/jai-core-1.1.3.pom
[artifact:dependencies] Transferring 1K
[artifact:dependencies] Downloading: javax/media/jai-core/1.1.3/jai-core-1.1.3.jar
[artifact:dependencies] Transferring 1856K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib
[artifact:dependencies] Downloading: com/sun/media/jai-codec/1.1.3/jai-codec-1.1.3.pom
[artifact:dependencies] Transferring 1K
[artifact:dependencies] Downloading: com/sun/media/jai-codec/1.1.3/jai-codec-1.1.3.jar
[artifact:dependencies] Transferring 252K
     [copy] Copying 1 file to /home/nutch/apache-ant-1.9.7/lib

nonm2-macros:

init-no-m2:

init-cache:

-setup-temp-cache:
    [mkdir] Created dir: /home/nutch/.ant/tempcache

-fetch-netrexx:

-fetch-netrexx-no-commons-net:
      [get] Getting: ftp://ftp.software.ibm.com/software/awdtools/netrexx/NetRexx.zip
      [get] To: /home/nutch/.ant/tempcache/NetRexx.zip
      [get] Error getting ftp://ftp.software.ibm.com/software/awdtools/netrexx/NetRexx.zip to /home/nutch/.ant/tempcache/NetRexx.zip

BUILD FAILED
/home/nutch/apache-ant-1.9.7/fetch.xml:328: java.net.ConnectException: Connection timed out
 at java.net.PlainSocketImpl.socketConnect(Native Method)
 at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
 at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
 at java.net.Socket.connect(Socket.java:589)
 at java.net.Socket.connect(Socket.java:538)
 at sun.net.ftp.impl.FtpClient.doConnect(FtpClient.java:957)
 at sun.net.ftp.impl.FtpClient.tryConnect(FtpClient.java:917)
 at sun.net.ftp.impl.FtpClient.connect(FtpClient.java:1012)
 at sun.net.ftp.impl.FtpClient.connect(FtpClient.java:998)
 at sun.net.www.protocol.ftp.FtpURLConnection.connect(FtpURLConnection.java:294)
 at org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:728)
 at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:641)
 at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:631)

Total time: 4 minutes 17 seconds
*** The NetRexx file could not be fetched from that FTP location, so download it from an alternative location instead, and modify fetch.xml so the next run skips the files that were already downloaded.

Install Ivy manually


nutch@45883500b170:~/apache-ant-1.9.7/lib$ cp -rp ~/apache-ivy-2.4.0/ivy-2.4.0.jar .
nutch@45883500b170:~/apache-ant-1.9.7/lib$ 
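
The same copy can be guarded so repeated runs are harmless (a sketch assuming the directory layout above; Ant picks up every jar placed in $ANT_HOME/lib automatically):

```shell
# Copy the Ivy jar into Ant's lib directory, skipping the copy
# if an identically named jar is already in place.
ANT_LIB="$HOME/apache-ant-1.9.7/lib"
IVY_JAR="$HOME/apache-ivy-2.4.0/ivy-2.4.0.jar"
if [ -f "$IVY_JAR" ] && [ ! -e "$ANT_LIB/$(basename "$IVY_JAR")" ]; then
  cp -p "$IVY_JAR" "$ANT_LIB/"
fi
```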





Sunday, August 28, 2016

Apache HBase - Pseudo-distributed (Part II)


Pseudo-distributed mode means that HBase still runs completely on a single host, but each HBase daemon (HMaster, HRegionServer, and ZooKeeper) runs as a separate process.



Prepare HDFS and directory permissions


hadoop@45883500b170:~/hadoop-2.7.3$ bin/hdfs dfs -mkdir /user/hbase
hadoop@45883500b170:~/hadoop-2.7.3/bin$ ./hdfs dfs -chown -R hbase:hbase /user/hbase
hadoop@45883500b170:~/hadoop-2.7.3/bin$ ./hdfs dfs -ls /user      
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2016-08-28 12:27 /user/hadoop
drwxr-xr-x   - hbase  hbase               0 2016-08-29 06:20 /user/hbase


Reconfigure HBase


hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- value>file:///home/hbase/hbase-1.2.2</value -->
    <value>hdfs://localhost:9000/user/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hbase/zookeeper</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
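
The hdfs://localhost:9000 authority in hbase.rootdir must match the NameNode address configured in Hadoop's core-site.xml. A typical pseudo-distributed entry (assumed here; check your own core-site.xml) looks like:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```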

hbase-env.sh:
...
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
...


SSH without a passphrase


hbase@45883500b170:~/hbase-1.2.2/bin$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /home/hbase/.ssh/id_rsa.
Your public key has been saved in /home/hbase/.ssh/id_rsa.pub.
The key fingerprint is:
57:57:7d:53:3d:54:3e:c5:fe:ef:58:db:1e:07:c1:51 hbase@45883500b170
The key's randomart image is:
+--[ RSA 2048]----+
|              o+E|
|             . **|
|            . +o=|
|           . . .o|
|        S .   . .|
|         .     ..|
|               .+|
|               o*|
|              .++|
+-----------------+
hbase@45883500b170:~/hbase-1.2.2/bin$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
hbase@45883500b170:~/hbase-1.2.2/bin$ chmod 0600 ~/.ssh/authorized_keys
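
A sketch to verify the key actually works. sshd ignores authorized_keys when permissions are too open, so the directory is locked down first; BatchMode makes ssh fail instead of prompting, so a lingering passphrase prompt shows up as an error rather than a hang (the very first connect may still need the host key accepted):

```shell
# Tighten permissions; sshd refuses group/world-writable key files.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
[ -f ~/.ssh/authorized_keys ] && chmod 600 ~/.ssh/authorized_keys || true

# BatchMode=yes makes ssh fail instead of asking for a passphrase.
ssh -o BatchMode=yes -o ConnectTimeout=5 localhost true \
  && echo "passwordless SSH OK" \
  || echo "passwordless SSH not working yet"
```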



Starting up HBase

hbase@45883500b170:~/hbase-1.2.2/bin$ ./start-hbase.sh 
localhost: starting zookeeper, logging to /home/hbase/hbase-1.2.2/bin/../logs/hbase-hbase-zookeeper-45883500b170.out
starting master, logging to /home/hbase/hbase-1.2.2/bin/../logs/hbase--master-45883500b170.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
starting regionserver, logging to /home/hbase/hbase-1.2.2/bin/../logs/hbase--1-regionserver-45883500b170.out
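
Each daemon should now be visible as its own JVM; jps (shipped with the JDK) gives a quick check. A sketch, and the output will vary — in pseudo-distributed mode the HBase-managed ZooKeeper typically appears as HQuorumPeer:

```shell
# List running JVMs; expect HMaster, HRegionServer and HQuorumPeer
# (plus the Hadoop daemons) if everything started cleanly.
if command -v jps >/dev/null 2>&1; then
  jps
else
  echo "jps not found - is JAVA_HOME/bin on PATH?"
fi
```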


While starting up, HBase creates the required files and directories in HDFS.


Open the HBase and Hadoop web interfaces

Testing


hbase@45883500b170:~/hbase-1.2.2/bin$ ./hbase shell
2016-08-29 06:47:10,665 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.2, r3f671c1ead70d249ea4598f1bbcc5151322b3a13, Fri Jul  1 08:28:55 CDT 2016

hbase(main):001:0> create 'test', 'cf'
0 row(s) in 4.8240 seconds

=> Hbase::Table - test
hbase(main):002:0> list 'test'
TABLE                                                                                                                                                     
test                                                                                                                                                      
1 row(s) in 0.0280 seconds

=> ["test"]
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.1310 seconds

hbase(main):004:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0200 seconds

hbase(main):005:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0150 seconds

hbase(main):006:0> scan 'test'
ROW                                     COLUMN+CELL                                                                                                       
 row1                                   column=cf:a, timestamp=1472453260190, value=value1                                                                
 row2                                   column=cf:b, timestamp=1472453264719, value=value2                                                                
 row3                                   column=cf:c, timestamp=1472453270502, value=value3                                                                
3 row(s) in 0.0350 seconds

hbase(main):007:0> get 'test', 'row1'
COLUMN                                  CELL                                                                                                              
 cf:a                                   timestamp=1472453260190, value=value1                                                                             
1 row(s) in 0.0350 seconds

hbase(main):008:0> exit
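
To remove the test table afterwards, HBase requires disabling it before it can be dropped (a sketch of the shell session, using the standard HBase shell commands):

```text
hbase(main):001:0> disable 'test'
hbase(main):002:0> drop 'test'
hbase(main):003:0> exit
```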


Apache Hadoop - Pseudo-Distributed



root@637c83896b9d:/# sudo apt-get install ssh
root@637c83896b9d:/# sudo apt-get install rsync
root@637c83896b9d:/# useradd hadoop -m -s /bin/bash 
root@637c83896b9d:/# passwd hadoop
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
hadoop@637c83896b9d:/$ update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-oracle/jre/bin/java
Nothing to configure.
hadoop@637c83896b9d:/$ JAVA_HOME=/usr/lib/jvm/java-8-oracle
hadoop@637c83896b9d:/$ export JAVA_HOME
hadoop@637c83896b9d:~$ tail .bashrc 
if ! shopt -oq posix; then
  if [ -f /usr/share/bash-completion/bash_completion ]; then
    . /usr/share/bash-completion/bash_completion
  elif [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
  fi
fi

JAVA_HOME=/usr/lib/jvm/java-8-oracle
export JAVA_HOME
hadoop@637c83896b9d:~$ 


hadoop@637c83896b9d:~$ cd hadoop-2.7.3/
hadoop@637c83896b9d:~/hadoop-2.7.3$ cd bin
hadoop@637c83896b9d:~/hadoop-2.7.3/bin$ 
hadoop@637c83896b9d:~/hadoop-2.7.3/bin$ ./hadoop
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  CLASSNAME            run the class named CLASSNAME
 or
  where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch
                             YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
  credential           interact with credential providers
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings

Most commands print help when invoked w/o parameters.



hadoop@637c83896b9d:~/hadoop-2.7.3$ pwd
/home/hadoop/hadoop-2.7.3
hadoop@637c83896b9d:~/hadoop-2.7.3$ ls
LICENSE.txt  NOTICE.txt  README.txt  bin  etc  include  lib  libexec  sbin  share
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ mkdir input
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ cp etc/hadoop/*.xml input
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'
16/08/28 10:30:37 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
16/08/28 10:30:37 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
16/08/28 10:30:37 INFO input.FileInputFormat: Total input paths to process : 8
16/08/28 10:30:37 INFO mapreduce.JobSubmitter: number of splits:8
16/08/28 10:30:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1669053990_0001
16/08/28 10:30:38 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
16/08/28 10:30:38 INFO mapreduce.Job: Running job: job_local1669053990_0001
16/08/28 10:30:38 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/08/28 10:30:38 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:38 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
16/08/28 10:30:38 INFO mapred.LocalJobRunner: Waiting for map tasks
16/08/28 10:30:38 INFO mapred.LocalJobRunner: Starting task: attempt_local1669053990_0001_m_000000_0
16/08/28 10:30:38 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:38 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 10:30:38 INFO mapred.MapTask: Processing split: file:/home/hadoop/hadoop-2.7.3/input/hadoop-policy.xml:0+9683
16/08/28 10:30:38 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 10:30:38 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 10:30:38 INFO mapred.MapTask: soft limit at 83886080
16/08/28 10:30:38 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 10:30:38 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 10:30:38 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 10:30:38 INFO mapred.LocalJobRunner: 
16/08/28 10:30:38 INFO mapred.MapTask: Starting flush of map output
16/08/28 10:30:38 INFO mapred.MapTask: Spilling map output
16/08/28 10:30:38 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
16/08/28 10:30:38 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
16/08/28 10:30:38 INFO mapred.MapTask: Finished spill 0
16/08/28 10:30:38 INFO mapred.Task: Task:attempt_local1669053990_0001_m_000000_0 is done. And is in the process of committing
16/08/28 10:30:38 INFO mapred.LocalJobRunner: map
16/08/28 10:30:38 INFO mapred.Task: Task 'attempt_local1669053990_0001_m_000000_0' done.
16/08/28 10:30:38 INFO mapred.LocalJobRunner: Finishing task: attempt_local1669053990_0001_m_000000_0
16/08/28 10:30:38 INFO mapred.LocalJobRunner: Starting task: attempt_local1669053990_0001_m_000001_0
16/08/28 10:30:38 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:38 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 10:30:38 INFO mapred.MapTask: Processing split: file:/home/hadoop/hadoop-2.7.3/input/kms-site.xml:0+5511
16/08/28 10:30:38 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 10:30:38 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 10:30:38 INFO mapred.MapTask: soft limit at 83886080
16/08/28 10:30:38 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 10:30:38 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 10:30:38 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 10:30:38 INFO mapred.LocalJobRunner: 
16/08/28 10:30:38 INFO mapred.MapTask: Starting flush of map output
16/08/28 10:30:38 INFO mapred.Task: Task:attempt_local1669053990_0001_m_000001_0 is done. And is in the process of committing
16/08/28 10:30:38 INFO mapred.LocalJobRunner: map
16/08/28 10:30:38 INFO mapred.Task: Task 'attempt_local1669053990_0001_m_000001_0' done.
16/08/28 10:30:38 INFO mapred.LocalJobRunner: Finishing task: attempt_local1669053990_0001_m_000001_0
16/08/28 10:30:38 INFO mapred.LocalJobRunner: Starting task: attempt_local1669053990_0001_m_000002_0
16/08/28 10:30:38 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:38 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 10:30:38 INFO mapred.MapTask: Processing split: file:/home/hadoop/hadoop-2.7.3/input/capacity-scheduler.xml:0+4436
16/08/28 10:30:38 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 10:30:38 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 10:30:38 INFO mapred.MapTask: soft limit at 83886080
16/08/28 10:30:38 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 10:30:38 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 10:30:38 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 10:30:38 INFO mapred.LocalJobRunner: 
16/08/28 10:30:38 INFO mapred.MapTask: Starting flush of map output
16/08/28 10:30:38 INFO mapred.Task: Task:attempt_local1669053990_0001_m_000002_0 is done. And is in the process of committing
16/08/28 10:30:38 INFO mapred.LocalJobRunner: map
16/08/28 10:30:38 INFO mapred.Task: Task 'attempt_local1669053990_0001_m_000002_0' done.
16/08/28 10:30:38 INFO mapred.LocalJobRunner: Finishing task: attempt_local1669053990_0001_m_000002_0
16/08/28 10:30:38 INFO mapred.LocalJobRunner: Starting task: attempt_local1669053990_0001_m_000003_0
16/08/28 10:30:38 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:38 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 10:30:38 INFO mapred.MapTask: Processing split: file:/home/hadoop/hadoop-2.7.3/input/kms-acls.xml:0+3518
16/08/28 10:30:38 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 10:30:38 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 10:30:38 INFO mapred.MapTask: soft limit at 83886080
16/08/28 10:30:38 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 10:30:38 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 10:30:38 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 10:30:38 INFO mapred.LocalJobRunner: 
16/08/28 10:30:38 INFO mapred.MapTask: Starting flush of map output
16/08/28 10:30:38 INFO mapred.Task: Task:attempt_local1669053990_0001_m_000003_0 is done. And is in the process of committing
16/08/28 10:30:38 INFO mapred.LocalJobRunner: map
16/08/28 10:30:38 INFO mapred.Task: Task 'attempt_local1669053990_0001_m_000003_0' done.
16/08/28 10:30:38 INFO mapred.LocalJobRunner: Finishing task: attempt_local1669053990_0001_m_000003_0
16/08/28 10:30:38 INFO mapred.LocalJobRunner: Starting task: attempt_local1669053990_0001_m_000004_0
16/08/28 10:30:38 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:38 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 10:30:38 INFO mapred.MapTask: Processing split: file:/home/hadoop/hadoop-2.7.3/input/hdfs-site.xml:0+775
16/08/28 10:30:39 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 10:30:39 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 10:30:39 INFO mapred.MapTask: soft limit at 83886080
16/08/28 10:30:39 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 10:30:39 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 10:30:39 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 10:30:39 INFO mapred.LocalJobRunner: 
16/08/28 10:30:39 INFO mapred.MapTask: Starting flush of map output
16/08/28 10:30:39 INFO mapred.Task: Task:attempt_local1669053990_0001_m_000004_0 is done. And is in the process of committing
16/08/28 10:30:39 INFO mapred.LocalJobRunner: map
16/08/28 10:30:39 INFO mapred.Task: Task 'attempt_local1669053990_0001_m_000004_0' done.
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Finishing task: attempt_local1669053990_0001_m_000004_0
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Starting task: attempt_local1669053990_0001_m_000005_0
16/08/28 10:30:39 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:39 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 10:30:39 INFO mapred.MapTask: Processing split: file:/home/hadoop/hadoop-2.7.3/input/core-site.xml:0+774
16/08/28 10:30:39 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 10:30:39 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 10:30:39 INFO mapred.MapTask: soft limit at 83886080
16/08/28 10:30:39 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 10:30:39 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 10:30:39 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 10:30:39 INFO mapred.LocalJobRunner: 
16/08/28 10:30:39 INFO mapred.MapTask: Starting flush of map output
16/08/28 10:30:39 INFO mapred.Task: Task:attempt_local1669053990_0001_m_000005_0 is done. And is in the process of committing
16/08/28 10:30:39 INFO mapred.LocalJobRunner: map
16/08/28 10:30:39 INFO mapred.Task: Task 'attempt_local1669053990_0001_m_000005_0' done.
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Finishing task: attempt_local1669053990_0001_m_000005_0
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Starting task: attempt_local1669053990_0001_m_000006_0
16/08/28 10:30:39 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:39 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 10:30:39 INFO mapred.MapTask: Processing split: file:/home/hadoop/hadoop-2.7.3/input/yarn-site.xml:0+690
16/08/28 10:30:39 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 10:30:39 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 10:30:39 INFO mapred.MapTask: soft limit at 83886080
16/08/28 10:30:39 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 10:30:39 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 10:30:39 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 10:30:39 INFO mapred.LocalJobRunner: 
16/08/28 10:30:39 INFO mapred.MapTask: Starting flush of map output
16/08/28 10:30:39 INFO mapred.Task: Task:attempt_local1669053990_0001_m_000006_0 is done. And is in the process of committing
16/08/28 10:30:39 INFO mapred.LocalJobRunner: map
16/08/28 10:30:39 INFO mapred.Task: Task 'attempt_local1669053990_0001_m_000006_0' done.
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Finishing task: attempt_local1669053990_0001_m_000006_0
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Starting task: attempt_local1669053990_0001_m_000007_0
16/08/28 10:30:39 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:39 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 10:30:39 INFO mapred.MapTask: Processing split: file:/home/hadoop/hadoop-2.7.3/input/httpfs-site.xml:0+620
16/08/28 10:30:39 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 10:30:39 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 10:30:39 INFO mapred.MapTask: soft limit at 83886080
16/08/28 10:30:39 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 10:30:39 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 10:30:39 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 10:30:39 INFO mapred.LocalJobRunner: 
16/08/28 10:30:39 INFO mapred.MapTask: Starting flush of map output
16/08/28 10:30:39 INFO mapred.Task: Task:attempt_local1669053990_0001_m_000007_0 is done. And is in the process of committing
16/08/28 10:30:39 INFO mapred.LocalJobRunner: map
16/08/28 10:30:39 INFO mapred.Task: Task 'attempt_local1669053990_0001_m_000007_0' done.
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Finishing task: attempt_local1669053990_0001_m_000007_0
16/08/28 10:30:39 INFO mapred.LocalJobRunner: map task executor complete.
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Waiting for reduce tasks
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Starting task: attempt_local1669053990_0001_r_000000_0
16/08/28 10:30:39 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:39 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 10:30:39 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@4acf1a45
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334338464, maxSingleShuffleLimit=83584616, mergeThreshold=220663392, ioSortFactor=10, memToMemMergeOutputsThreshold=10
16/08/28 10:30:39 INFO reduce.EventFetcher: attempt_local1669053990_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
16/08/28 10:30:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1669053990_0001_m_000007_0 decomp: 2 len: 6 to MEMORY
16/08/28 10:30:39 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1669053990_0001_m_000007_0
16/08/28 10:30:39 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
 at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->2
16/08/28 10:30:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1669053990_0001_m_000004_0 decomp: 2 len: 6 to MEMORY
16/08/28 10:30:39 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1669053990_0001_m_000004_0
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 2, commitMemory -> 2, usedMemory ->4
16/08/28 10:30:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1669053990_0001_m_000001_0 decomp: 2 len: 6 to MEMORY
16/08/28 10:30:39 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1669053990_0001_m_000001_0
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 3, commitMemory -> 4, usedMemory ->6
16/08/28 10:30:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1669053990_0001_m_000005_0 decomp: 2 len: 6 to MEMORY
16/08/28 10:30:39 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1669053990_0001_m_000005_0
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 4, commitMemory -> 6, usedMemory ->8
16/08/28 10:30:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1669053990_0001_m_000002_0 decomp: 2 len: 6 to MEMORY
16/08/28 10:30:39 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1669053990_0001_m_000002_0
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 5, commitMemory -> 8, usedMemory ->10
16/08/28 10:30:39 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
 at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
16/08/28 10:30:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1669053990_0001_m_000006_0 decomp: 2 len: 6 to MEMORY
16/08/28 10:30:39 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1669053990_0001_m_000006_0
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 6, commitMemory -> 10, usedMemory ->12
16/08/28 10:30:39 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
 at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
16/08/28 10:30:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1669053990_0001_m_000003_0 decomp: 2 len: 6 to MEMORY
16/08/28 10:30:39 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1669053990_0001_m_000003_0
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 7, commitMemory -> 12, usedMemory ->14
16/08/28 10:30:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1669053990_0001_m_000000_0 decomp: 21 len: 25 to MEMORY
16/08/28 10:30:39 INFO reduce.InMemoryMapOutput: Read 21 bytes from map-output for attempt_local1669053990_0001_m_000000_0
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 21, inMemoryMapOutputs.size() -> 8, commitMemory -> 14, usedMemory ->35
16/08/28 10:30:39 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
16/08/28 10:30:39 INFO mapred.LocalJobRunner: 8 / 8 copied.
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: finalMerge called with 8 in-memory map-outputs and 0 on-disk map-outputs
16/08/28 10:30:39 INFO mapred.Merger: Merging 8 sorted segments
16/08/28 10:30:39 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 10 bytes
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: Merged 8 segments, 35 bytes to disk to satisfy reduce memory limit
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: Merging 1 files, 25 bytes from disk
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
16/08/28 10:30:39 INFO mapred.Merger: Merging 1 sorted segments
16/08/28 10:30:39 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 10 bytes
16/08/28 10:30:39 INFO mapred.LocalJobRunner: 8 / 8 copied.
16/08/28 10:30:39 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
16/08/28 10:30:39 INFO mapred.Task: Task:attempt_local1669053990_0001_r_000000_0 is done. And is in the process of committing
16/08/28 10:30:39 INFO mapred.LocalJobRunner: 8 / 8 copied.
16/08/28 10:30:39 INFO mapred.Task: Task attempt_local1669053990_0001_r_000000_0 is allowed to commit now
16/08/28 10:30:39 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1669053990_0001_r_000000_0' to file:/home/hadoop/hadoop-2.7.3/grep-temp-134950757/_temporary/0/task_local1669053990_0001_r_000000
16/08/28 10:30:39 INFO mapred.LocalJobRunner: reduce > reduce
16/08/28 10:30:39 INFO mapred.Task: Task 'attempt_local1669053990_0001_r_000000_0' done.
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Finishing task: attempt_local1669053990_0001_r_000000_0
16/08/28 10:30:39 INFO mapred.LocalJobRunner: reduce task executor complete.
16/08/28 10:30:39 INFO mapreduce.Job: Job job_local1669053990_0001 running in uber mode : false
16/08/28 10:30:39 INFO mapreduce.Job:  map 100% reduce 100%
16/08/28 10:30:39 INFO mapreduce.Job: Job job_local1669053990_0001 completed successfully
16/08/28 10:30:39 INFO mapreduce.Job: Counters: 30
 File System Counters
  FILE: Number of bytes read=2895165
  FILE: Number of bytes written=5259782
  FILE: Number of read operations=0
  FILE: Number of large read operations=0
  FILE: Number of write operations=0
 Map-Reduce Framework
  Map input records=745
  Map output records=1
  Map output bytes=17
  Map output materialized bytes=67
  Input split bytes=933
  Combine input records=1
  Combine output records=1
  Reduce input groups=1
  Reduce shuffle bytes=67
  Reduce input records=1
  Reduce output records=1
  Spilled Records=2
  Shuffled Maps =8
  Failed Shuffles=0
  Merged Map outputs=8
  GC time elapsed (ms)=85
  Total committed heap usage (bytes)=2771386368
 Shuffle Errors
  BAD_ID=0
  CONNECTION=0
  IO_ERROR=0
  WRONG_LENGTH=0
  WRONG_MAP=0
  WRONG_REDUCE=0
 File Input Format Counters 
  Bytes Read=26007
 File Output Format Counters 
  Bytes Written=123
16/08/28 10:30:39 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
16/08/28 10:30:39 INFO input.FileInputFormat: Total input paths to process : 1
16/08/28 10:30:39 INFO mapreduce.JobSubmitter: number of splits:1
16/08/28 10:30:39 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local2056692232_0002
16/08/28 10:30:39 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
16/08/28 10:30:39 INFO mapreduce.Job: Running job: job_local2056692232_0002
16/08/28 10:30:39 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/08/28 10:30:39 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:39 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Waiting for map tasks
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Starting task: attempt_local2056692232_0002_m_000000_0
16/08/28 10:30:39 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:39 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 10:30:39 INFO mapred.MapTask: Processing split: file:/home/hadoop/hadoop-2.7.3/grep-temp-134950757/part-r-00000:0+111
16/08/28 10:30:39 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 10:30:39 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 10:30:39 INFO mapred.MapTask: soft limit at 83886080
16/08/28 10:30:39 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 10:30:39 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 10:30:39 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 10:30:39 INFO mapred.LocalJobRunner: 
16/08/28 10:30:39 INFO mapred.MapTask: Starting flush of map output
16/08/28 10:30:39 INFO mapred.MapTask: Spilling map output
16/08/28 10:30:39 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
16/08/28 10:30:39 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
16/08/28 10:30:39 INFO mapred.MapTask: Finished spill 0
16/08/28 10:30:39 INFO mapred.Task: Task:attempt_local2056692232_0002_m_000000_0 is done. And is in the process of committing
16/08/28 10:30:39 INFO mapred.LocalJobRunner: map
16/08/28 10:30:39 INFO mapred.Task: Task 'attempt_local2056692232_0002_m_000000_0' done.
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Finishing task: attempt_local2056692232_0002_m_000000_0
16/08/28 10:30:39 INFO mapred.LocalJobRunner: map task executor complete.
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Waiting for reduce tasks
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Starting task: attempt_local2056692232_0002_r_000000_0
16/08/28 10:30:39 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 10:30:39 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 10:30:39 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@3f72041c
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334338464, maxSingleShuffleLimit=83584616, mergeThreshold=220663392, ioSortFactor=10, memToMemMergeOutputsThreshold=10
16/08/28 10:30:39 INFO reduce.EventFetcher: attempt_local2056692232_0002_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
16/08/28 10:30:39 INFO reduce.LocalFetcher: localfetcher#2 about to shuffle output of map attempt_local2056692232_0002_m_000000_0 decomp: 21 len: 25 to MEMORY
16/08/28 10:30:39 INFO reduce.InMemoryMapOutput: Read 21 bytes from map-output for attempt_local2056692232_0002_m_000000_0
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 21, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->21
16/08/28 10:30:39 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
16/08/28 10:30:39 INFO mapred.LocalJobRunner: 1 / 1 copied.
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
16/08/28 10:30:39 INFO mapred.Merger: Merging 1 sorted segments
16/08/28 10:30:39 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 11 bytes
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: Merged 1 segments, 21 bytes to disk to satisfy reduce memory limit
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: Merging 1 files, 25 bytes from disk
16/08/28 10:30:39 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
16/08/28 10:30:39 INFO mapred.Merger: Merging 1 sorted segments
16/08/28 10:30:39 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 11 bytes
16/08/28 10:30:39 INFO mapred.LocalJobRunner: 1 / 1 copied.
16/08/28 10:30:39 INFO mapred.Task: Task:attempt_local2056692232_0002_r_000000_0 is done. And is in the process of committing
16/08/28 10:30:39 INFO mapred.LocalJobRunner: 1 / 1 copied.
16/08/28 10:30:39 INFO mapred.Task: Task attempt_local2056692232_0002_r_000000_0 is allowed to commit now
16/08/28 10:30:39 INFO output.FileOutputCommitter: Saved output of task 'attempt_local2056692232_0002_r_000000_0' to file:/home/hadoop/hadoop-2.7.3/output/_temporary/0/task_local2056692232_0002_r_000000
16/08/28 10:30:39 INFO mapred.LocalJobRunner: reduce > reduce
16/08/28 10:30:39 INFO mapred.Task: Task 'attempt_local2056692232_0002_r_000000_0' done.
16/08/28 10:30:39 INFO mapred.LocalJobRunner: Finishing task: attempt_local2056692232_0002_r_000000_0
16/08/28 10:30:39 INFO mapred.LocalJobRunner: reduce task executor complete.
16/08/28 10:30:40 INFO mapreduce.Job: Job job_local2056692232_0002 running in uber mode : false
16/08/28 10:30:40 INFO mapreduce.Job:  map 100% reduce 100%
16/08/28 10:30:40 INFO mapreduce.Job: Job job_local2056692232_0002 completed successfully
16/08/28 10:30:40 INFO mapreduce.Job: Counters: 30
 File System Counters
  FILE: Number of bytes read=1249180
  FILE: Number of bytes written=2333622
  FILE: Number of read operations=0
  FILE: Number of large read operations=0
  FILE: Number of write operations=0
 Map-Reduce Framework
  Map input records=1
  Map output records=1
  Map output bytes=17
  Map output materialized bytes=25
  Input split bytes=128
  Combine input records=0
  Combine output records=0
  Reduce input groups=1
  Reduce shuffle bytes=25
  Reduce input records=1
  Reduce output records=1
  Spilled Records=2
  Shuffled Maps =1
  Failed Shuffles=0
  Merged Map outputs=1
  GC time elapsed (ms)=0
  Total committed heap usage (bytes)=896532480
 Shuffle Errors
  BAD_ID=0
  CONNECTION=0
  IO_ERROR=0
  WRONG_LENGTH=0
  WRONG_MAP=0
  WRONG_REDUCE=0
 File Input Format Counters 
  Bytes Read=123
 File Output Format Counters 
  Bytes Written=23
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ ls share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar
share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar
hadoop@637c83896b9d:~/hadoop-2.7.3$ cat output/*
1 dfsadmin
* The text highlighted in orange above (the "Failed readahead on ifile" EBADF warnings) means a file was closed while a readahead request was still in flight; it is harmless here. (Thanks to this forum.)


Alternatively, verify the result with the Unix grep command:


hadoop@637c83896b9d:~/hadoop-2.7.3/input$ ls
capacity-scheduler.xml  core-site.xml  hadoop-policy.xml  hdfs-site.xml  httpfs-site.xml  kms-acls.xml  kms-site.xml  yarn-site.xml
hadoop@637c83896b9d:~/hadoop-2.7.3/input$ ls -lt
total 48
-rw-r--r-- 1 hadoop hadoop 4436 Aug 28 10:30 capacity-scheduler.xml
-rw-r--r-- 1 hadoop hadoop  774 Aug 28 10:30 core-site.xml
-rw-r--r-- 1 hadoop hadoop 9683 Aug 28 10:30 hadoop-policy.xml
-rw-r--r-- 1 hadoop hadoop  775 Aug 28 10:30 hdfs-site.xml
-rw-r--r-- 1 hadoop hadoop  620 Aug 28 10:30 httpfs-site.xml
-rw-r--r-- 1 hadoop hadoop 3518 Aug 28 10:30 kms-acls.xml
-rw-r--r-- 1 hadoop hadoop 5511 Aug 28 10:30 kms-site.xml
-rw-r--r-- 1 hadoop hadoop  690 Aug 28 10:30 yarn-site.xml
hadoop@637c83896b9d:~/hadoop-2.7.3/input$ grep dfs *.xml
hadoop-policy.xml:    dfsadmin and mradmin commands to refresh the security policy in-effect.
hadoop@637c83896b9d:~/hadoop-2.7.3/input$ cd ..
hadoop@637c83896b9d:~/hadoop-2.7.3$ cd output
hadoop@637c83896b9d:~/hadoop-2.7.3/output$ ls -l
total 4
-rw-r--r-- 1 hadoop hadoop  0 Aug 28 10:30 _SUCCESS
-rw-r--r-- 1 hadoop hadoop 11 Aug 28 10:30 part-r-00000
hadoop@637c83896b9d:~/hadoop-2.7.3/output$ cat *
1 dfsadmin
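For reference, the Hadoop grep example job can be approximated with plain Unix tools. This is a sketch, assuming the quickstart pattern `'dfs[a-z.]+'` was the regex passed to the example jar (as in the Hadoop single-node guide); a sample file stands in for the real config files here.

```shell
# Rough Unix equivalent of the hadoop-mapreduce-examples "grep" job:
# extract every regex match, then count occurrences per distinct match.
# Run it from the input directory (~/hadoop-2.7.3/input in the session above);
# a sample file is created here so the sketch is self-contained.
mkdir -p /tmp/grep-demo && cd /tmp/grep-demo
printf 'dfsadmin and mradmin commands to refresh the security policy\n' > hadoop-policy.xml
grep -Eoh 'dfs[a-z.]+' *.xml | sort | uniq -c | sort -rn
# -> 1 dfsadmin (count, then the matched string)
```

The `sort | uniq -c | sort -rn` pipeline plays the role of the job's combine/reduce phase: counting identical matches and ordering by frequency.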


hadoop@637c83896b9d:~/hadoop-2.7.3$ sudo service ssh start
[sudo] password for hadoop: 
hadoop is not in the sudoers file.  This incident will be reported.
hadoop@637c83896b9d:~/hadoop-2.7.3$ exit
exit
ubuntu@node2:~$ docker exec -it  hbase bash
root@637c83896b9d:/# service ssh start
 * Starting OpenBSD Secure Shell server sshd                                                                                                       [ OK ] 
root@637c83896b9d:/# exit
exit
ubuntu@node2:~$ docker exec -it --user hadoop hbase bash
hadoop@637c83896b9d:/$ 
hadoop@637c83896b9d:/$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is 04:0d:98:59:94:bc:94:83:1a:de:3d:ae:3d:9b:a0:20.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
hadoop@localhost's password: 
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-92-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

hadoop@637c83896b9d:~$ exit
logout
Connection to localhost closed.
hadoop@637c83896b9d:/$ 
hadoop@637c83896b9d:/$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
b8:b5:59:7c:b5:1f:f8:af:ab:a3:74:d0:82:bf:a0:67 hadoop@637c83896b9d
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|              .  |
|       . o . ... |
|      . S = o... |
|       o = +  ...|
|      . + o .  ..|
|       .Eo o.   .|
|      .o  o..ooo.|
+-----------------+
hadoop@637c83896b9d:/$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
hadoop@637c83896b9d:/$ chmod 0600 ~/.ssh/authorized_keys
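If passwordless login still prompts for a password after this, sshd's StrictModes permission check is the usual culprit. A quick sketch of the expected modes (assuming GNU coreutils `stat` on Linux):

```shell
# sshd silently ignores authorized_keys when ~/.ssh or the key files are
# group/world-accessible. Expected modes: directory 700, key files 600.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
stat -c '%a %n' ~/.ssh ~/.ssh/authorized_keys
```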
hadoop@637c83896b9d:/$ 
hadoop@637c83896b9d:/$ ssh localhost
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-85-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
Last login: Sun Aug 28 11:03:32 2016 from localhost


Run a MapReduce job in pseudo-distributed mode

1) Format the file system


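Before formatting, the Hadoop single-node guide sets `fs.defaultFS` and `dfs.replication` for pseudo-distributed operation. A sketch that writes those two minimal files; `CONF_DIR` is a placeholder for `etc/hadoop` inside the Hadoop tree:

```shell
# Hedged sketch: the minimal pseudo-distributed settings from the Hadoop 2.7
# single-node guide. CONF_DIR stands in for etc/hadoop in a real install.
CONF_DIR=${CONF_DIR:-/tmp/hadoop-conf-demo}
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
grep -h '<name>' "$CONF_DIR"/*.xml
```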
hadoop@637c83896b9d:~/hadoop-2.7.3$ bin/hdfs namenode -format
16/08/28 11:09:29 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hbase/172.29.5.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG:   classpath = /home/hadoop/hadoop-2.7.3/etc/hadoop:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/
lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/hadoop/hadoop-2.7
.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/hadoo
p/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/ho
me/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/y
arn/hadoop-yarn-server-nodemanager-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/j
unit-4.11.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/home/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG:   java = 1.8.0_101
************************************************************/
16/08/28 11:09:29 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/08/28 11:09:29 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-83a59c31-a355-4323-9334-d14989b04a59
16/08/28 11:09:29 INFO namenode.FSNamesystem: No KeyProvider found.
16/08/28 11:09:29 INFO namenode.FSNamesystem: fsLock is fair:true
16/08/28 11:09:29 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/08/28 11:09:29 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/08/28 11:09:29 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/08/28 11:09:29 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Aug 28 11:09:29
16/08/28 11:09:29 INFO util.GSet: Computing capacity for map BlocksMap
16/08/28 11:09:29 INFO util.GSet: VM type       = 64-bit
16/08/28 11:09:29 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/08/28 11:09:29 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/08/28 11:09:29 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/08/28 11:09:29 INFO blockmanagement.BlockManager: defaultReplication         = 1
16/08/28 11:09:29 INFO blockmanagement.BlockManager: maxReplication             = 512
16/08/28 11:09:29 INFO blockmanagement.BlockManager: minReplication             = 1
16/08/28 11:09:29 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/08/28 11:09:29 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/08/28 11:09:29 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/08/28 11:09:29 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/08/28 11:09:29 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
16/08/28 11:09:29 INFO namenode.FSNamesystem: supergroup          = supergroup
16/08/28 11:09:29 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/08/28 11:09:29 INFO namenode.FSNamesystem: HA Enabled: false
16/08/28 11:09:29 INFO namenode.FSNamesystem: Append Enabled: true
16/08/28 11:09:29 INFO util.GSet: Computing capacity for map INodeMap
16/08/28 11:09:29 INFO util.GSet: VM type       = 64-bit
16/08/28 11:09:29 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/08/28 11:09:29 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/08/28 11:09:29 INFO namenode.FSDirectory: ACLs enabled? false
16/08/28 11:09:29 INFO namenode.FSDirectory: XAttrs enabled? true
16/08/28 11:09:29 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/08/28 11:09:29 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/08/28 11:09:29 INFO util.GSet: Computing capacity for map cachedBlocks
16/08/28 11:09:29 INFO util.GSet: VM type       = 64-bit
16/08/28 11:09:29 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/08/28 11:09:29 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/08/28 11:09:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/08/28 11:09:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/08/28 11:09:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/08/28 11:09:29 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/08/28 11:09:29 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/08/28 11:09:29 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/08/28 11:09:29 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/08/28 11:09:29 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/08/28 11:09:29 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/08/28 11:09:29 INFO util.GSet: VM type       = 64-bit
16/08/28 11:09:29 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/08/28 11:09:29 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/08/28 11:09:29 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1423794091-172.29.5.1-1472382569816
16/08/28 11:09:30 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
16/08/28 11:09:30 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
16/08/28 11:09:30 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 353 bytes saved in 0 seconds.
16/08/28 11:09:30 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/08/28 11:09:30 INFO util.ExitUtil: Exiting with status 0
16/08/28 11:09:30 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hbase/172.29.5.1
************************************************************/
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
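One thing worth noting in the format output above: the NameNode storage directory defaulted to /tmp/hadoop-hadoop/dfs/name, and /tmp is usually wiped on reboot. A persistent location can be configured via the dfs.namenode.name.dir property in etc/hadoop/hdfs-site.xml. A minimal sketch (the target path and the /tmp demo file below are illustrative, not from this setup):

```shell
# Sketch only: write a demo hdfs-site.xml fragment pointing
# dfs.namenode.name.dir at a persistent directory instead of /tmp.
# In a real setup this content goes into etc/hadoop/hdfs-site.xml.
cat > /tmp/hdfs-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/dfs/name</value>
  </property>
</configuration>
EOF
# Confirm the property landed in the file
grep 'dfs.namenode.name.dir' /tmp/hdfs-site-demo.xml
```

After changing this property you would need to re-run `bin/hdfs namenode -format`, since the existing image lives in the old directory.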


2) Start the NameNode and DataNode daemons

Attention: you must set JAVA_HOME in the Hadoop env files (etc/hadoop/hadoop-env.sh) to avoid startup problems.
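The edit can also be scripted with sed rather than done by hand. A minimal sketch, run here against a scratch copy so the commands are self-contained (the JVM path is the one used in this setup; adjust it to your own JVM location):

```shell
# Sketch: replace the commented-out JAVA_HOME line in hadoop-env.sh with an
# explicit path. We operate on a scratch copy; in a real setup target
# etc/hadoop/hadoop-env.sh instead.
mkdir -p /tmp/hadoop-env-demo
cat > /tmp/hadoop-env-demo/hadoop-env.sh <<'EOF'
# The only required environment variable is JAVA_HOME.
#export JAVA_HOME=${JAVA_HOME}
EOF
sed -i 's|^#export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/java-8-oracle|' \
    /tmp/hadoop-env-demo/hadoop-env.sh
# Verify the substitution
grep '^export JAVA_HOME' /tmp/hadoop-env-demo/hadoop-env.sh
```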


hadoop@637c83896b9d:~/hadoop-2.7.3$ grep JAVA_HOME etc/hadoop/hadoop-env.sh
# The only required environment variable is JAVA_HOME.  All others are
# set JAVA_HOME in this file, so that it is correctly defined on
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
hadoop@637c83896b9d:~/hadoop-2.7.3$ sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-637c83896b9d.out
localhost: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-637c83896b9d.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-637c83896b9d.out
hadoop@637c83896b9d:~/hadoop-2.7.3$ 

You can now browse the NameNode web interface at http://localhost:50070/


hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ curl http://localhost:50070/
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="REFRESH" content="0;url=dfshealth.html" />
<title>Hadoop Administration</title>
</head>
</html>
hadoop@637c83896b9d:~/hadoop-2.7.3$ 


hadoop@637c83896b9d:~/hadoop-2.7.3$ bin/hdfs dfs -mkdir /user
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ bin/hdfs dfs -mkdir /user/hadoop
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
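Incidentally, the two mkdir calls above can be collapsed into one: `hdfs dfs -mkdir` accepts a `-p` flag that creates parent directories as needed, just like the Unix mkdir. Demonstrated here with the local mkdir so the snippet is self-contained (the /tmp path is made up for the demo):

```shell
# The HDFS equivalent would be:  bin/hdfs dfs -mkdir -p /user/hadoop
# Local analogue: -p creates /tmp/mkdirp-demo and /tmp/mkdirp-demo/user too.
mkdir -p /tmp/mkdirp-demo/user/hadoop
ls -d /tmp/mkdirp-demo/user/hadoop
```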
hadoop@637c83896b9d:~/hadoop-2.7.3$ bin/hdfs dfs -put etc/hadoop input
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
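The example job below greps the uploaded config files for tokens matching the pattern `dfs[a-z.]+`. The Hadoop grep example interprets this with Java's regex engine, but for a simple pattern like this, plain egrep behaves the same, so the pattern can be previewed locally (the sample file below is made up for illustration):

```shell
# Preview what 'dfs[a-z.]+' matches: "dfs" followed by one or more
# lowercase letters or dots. Sample content is illustrative only.
cat > /tmp/grep-demo.txt <<'EOF'
dfs.replication=1
some unrelated line
dfs.namenode.name.dir=/tmp/name
EOF
grep -oE 'dfs[a-z.]+' /tmp/grep-demo.txt
# prints: dfs.replication
#         dfs.namenode.name.dir
```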
hadoop@637c83896b9d:~/hadoop-2.7.3$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'
16/08/28 12:27:06 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
16/08/28 12:27:06 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
16/08/28 12:27:06 INFO input.FileInputFormat: Total input paths to process : 29
16/08/28 12:27:06 INFO mapreduce.JobSubmitter: number of splits:29
16/08/28 12:27:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1735263799_0001
16/08/28 12:27:07 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
16/08/28 12:27:07 INFO mapreduce.Job: Running job: job_local1735263799_0001
16/08/28 12:27:07 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Waiting for map tasks
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000000_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/log4j.properties:0+11237
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.MapTask: Spilling map output
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufend = 279; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214364(104857456); length = 33/6553600
16/08/28 12:27:07 INFO mapred.MapTask: Finished spill 0
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000000_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000000_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000000_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000001_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/hadoop-policy.xml:0+9683
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.MapTask: Spilling map output
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
16/08/28 12:27:07 INFO mapred.MapTask: Finished spill 0
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000001_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000001_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000001_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000002_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/kms-site.xml:0+5511
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000002_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000002_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000002_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000003_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/yarn-env.sh:0+4567
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000003_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000003_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000003_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000004_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/capacity-scheduler.xml:0+4436
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000004_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000004_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000004_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000005_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/hadoop-env.sh:0+4269
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.MapTask: Spilling map output
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufend = 50; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
16/08/28 12:27:07 INFO mapred.MapTask: Finished spill 0
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000005_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000005_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000005_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000006_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/mapred-queues.xml.template:0+4113
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000006_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000006_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000006_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000007_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/hadoop-env.cmd:0+3589
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.MapTask: Spilling map output
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufend = 50; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
16/08/28 12:27:07 INFO mapred.MapTask: Finished spill 0
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000007_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000007_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000007_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000008_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/kms-acls.xml:0+3518
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000008_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000008_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000008_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000009_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/hadoop-metrics2.properties:0+2598
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000009_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000009_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000009_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000010_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/hadoop-metrics.properties:0+2490
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.MapTask: Spilling map output
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufend = 170; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214364(104857456); length = 33/6553600
16/08/28 12:27:07 INFO mapred.MapTask: Finished spill 0
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000010_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000010_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000010_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000011_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/ssl-client.xml.example:0+2316
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000011_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000011_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000011_0
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000012_0
16/08/28 12:27:07 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:07 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/ssl-server.xml.example:0+2268
16/08/28 12:27:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:07 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:07 INFO mapred.LocalJobRunner: 
16/08/28 12:27:07 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:07 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000012_0 is done. And is in the process of committing
16/08/28 12:27:07 INFO mapred.LocalJobRunner: map
16/08/28 12:27:07 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000012_0' done.
16/08/28 12:27:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000012_0
...
16/08/28 12:27:08 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000017_0
16/08/28 12:27:08 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:08 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:08 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/httpfs-env.sh:0+1449
16/08/28 12:27:08 INFO mapreduce.Job: Job job_local1735263799_0001 running in uber mode : false
16/08/28 12:27:08 INFO mapreduce.Job:  map 100% reduce 0%
16/08/28 12:27:08 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:08 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:08 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:08 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:08 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:08 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:08 INFO mapred.LocalJobRunner: 
16/08/28 12:27:08 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:08 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000017_0 is done. And is in the process of committing
16/08/28 12:27:08 INFO mapred.LocalJobRunner: map
16/08/28 12:27:08 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000017_0' done.
16/08/28 12:27:08 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000017_0
...
16/08/28 12:27:08 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_m_000022_0
16/08/28 12:27:08 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:08 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:08 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/hdfs-site.xml:0+863
16/08/28 12:27:08 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:08 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:08 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:08 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:08 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:08 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:08 INFO mapred.LocalJobRunner: 
16/08/28 12:27:08 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:08 INFO mapred.MapTask: Spilling map output
16/08/28 12:27:08 INFO mapred.MapTask: bufstart = 0; bufend = 24; bufvoid = 104857600
16/08/28 12:27:08 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
16/08/28 12:27:08 INFO mapred.MapTask: Finished spill 0
16/08/28 12:27:08 INFO mapred.Task: Task:attempt_local1735263799_0001_m_000022_0 is done. And is in the process of committing
16/08/28 12:27:08 INFO mapred.LocalJobRunner: map
16/08/28 12:27:08 INFO mapred.Task: Task 'attempt_local1735263799_0001_m_000022_0' done.
16/08/28 12:27:08 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_m_000022_0
...
16/08/28 12:27:08 INFO mapred.LocalJobRunner: map task executor complete.
16/08/28 12:27:08 INFO mapred.LocalJobRunner: Waiting for reduce tasks
16/08/28 12:27:08 INFO mapred.LocalJobRunner: Starting task: attempt_local1735263799_0001_r_000000_0
16/08/28 12:27:08 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:08 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:08 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@3be15b04
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=371038624, maxSingleShuffleLimit=92759656, mergeThreshold=244885504, ioSortFactor=10, memToMemMergeOutputsThreshold=10
16/08/28 12:27:08 INFO reduce.EventFetcher: attempt_local1735263799_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000016_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000016_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->2
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000003_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
 at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000003_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 2, commitMemory -> 2, usedMemory ->4
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000004_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000004_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 3, commitMemory -> 4, usedMemory ->6
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000017_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
...
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000017_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 4, commitMemory -> 6, usedMemory ->8
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000027_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000027_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 5, commitMemory -> 8, usedMemory ->10
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000002_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000002_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 6, commitMemory -> 10, usedMemory ->12
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000028_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000028_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 7, commitMemory -> 12, usedMemory ->14
16/08/28 12:27:08 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
...
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000015_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000015_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 8, commitMemory -> 14, usedMemory ->16
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000019_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000019_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 9, commitMemory -> 16, usedMemory ->18
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000020_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
...
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000020_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 10, commitMemory -> 18, usedMemory ->20
...
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000000_0 decomp: 135 len: 139 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 135 bytes from map-output for attempt_local1735263799_0001_m_000000_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 135, inMemoryMapOutputs.size() -> 21, commitMemory -> 227, usedMemory ->362
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000026_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000026_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 22, commitMemory -> 362, usedMemory ->364
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000013_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000013_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 23, commitMemory -> 364, usedMemory ->366
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000014_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000014_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 24, commitMemory -> 366, usedMemory ->368
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000001_0 decomp: 21 len: 25 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 21 bytes from map-output for attempt_local1735263799_0001_m_000001_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 21, inMemoryMapOutputs.size() -> 25, commitMemory -> 368, usedMemory ->389
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000024_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000024_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 26, commitMemory -> 389, usedMemory ->391
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000011_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000011_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 27, commitMemory -> 391, usedMemory ->393
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000012_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000012_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 28, commitMemory -> 393, usedMemory ->395
16/08/28 12:27:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1735263799_0001_m_000025_0 decomp: 2 len: 6 to MEMORY
16/08/28 12:27:08 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1735263799_0001_m_000025_0
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 29, commitMemory -> 395, usedMemory ->397
16/08/28 12:27:08 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
16/08/28 12:27:08 INFO mapred.LocalJobRunner: 29 / 29 copied.
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: finalMerge called with 29 in-memory map-outputs and 0 on-disk map-outputs
16/08/28 12:27:08 INFO mapred.Merger: Merging 29 sorted segments
16/08/28 12:27:08 INFO mapred.Merger: Down to the last merge-pass, with 6 segments left of total size: 241 bytes
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: Merged 29 segments, 397 bytes to disk to satisfy reduce memory limit
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: Merging 1 files, 345 bytes from disk
16/08/28 12:27:08 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
16/08/28 12:27:08 INFO mapred.Merger: Merging 1 sorted segments
16/08/28 12:27:08 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 310 bytes
16/08/28 12:27:08 INFO mapred.LocalJobRunner: 29 / 29 copied.
16/08/28 12:27:08 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
16/08/28 12:27:08 INFO mapred.Task: Task:attempt_local1735263799_0001_r_000000_0 is done. And is in the process of committing
16/08/28 12:27:08 INFO mapred.LocalJobRunner: 29 / 29 copied.
16/08/28 12:27:08 INFO mapred.Task: Task attempt_local1735263799_0001_r_000000_0 is allowed to commit now
16/08/28 12:27:09 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1735263799_0001_r_000000_0' to hdfs://localhost:9000/user/hadoop/grep-temp-1779120689/_temporary/0/task_local1735263799_0001_r_000000
16/08/28 12:27:09 INFO mapred.LocalJobRunner: reduce > reduce
16/08/28 12:27:09 INFO mapred.Task: Task 'attempt_local1735263799_0001_r_000000_0' done.
16/08/28 12:27:09 INFO mapred.LocalJobRunner: Finishing task: attempt_local1735263799_0001_r_000000_0
16/08/28 12:27:09 INFO mapred.LocalJobRunner: reduce task executor complete.
16/08/28 12:27:09 INFO mapreduce.Job:  map 100% reduce 100%
16/08/28 12:27:10 INFO mapreduce.Job: Job job_local1735263799_0001 completed successfully
16/08/28 12:27:10 INFO mapreduce.Job: Counters: 35
 File System Counters
  FILE: Number of bytes read=10112916
  FILE: Number of bytes written=17653982
  FILE: Number of read operations=0
  FILE: Number of large read operations=0
  FILE: Number of write operations=0
  HDFS: Number of bytes read=1763191
  HDFS: Number of bytes written=437
  HDFS: Number of read operations=1021
  HDFS: Number of large read operations=0
  HDFS: Number of write operations=32
 Map-Reduce Framework
  Map input records=2069
  Map output records=24
  Map output bytes=590
  Map output materialized bytes=513
  Input split bytes=3534
  Combine input records=24
  Combine output records=13
  Reduce input groups=11
  Reduce shuffle bytes=513
  Reduce input records=13
  Reduce output records=11
  Spilled Records=26
  Shuffled Maps =29
  Failed Shuffles=0
  Merged Map outputs=29
  GC time elapsed (ms)=110
  Total committed heap usage (bytes)=15204876288
 Shuffle Errors
  BAD_ID=0
  CONNECTION=0
  IO_ERROR=0
  WRONG_LENGTH=0
  WRONG_MAP=0
  WRONG_REDUCE=0
 File Input Format Counters 
  Bytes Read=76859
 File Output Format Counters 
  Bytes Written=437
16/08/28 12:27:10 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
16/08/28 12:27:10 INFO input.FileInputFormat: Total input paths to process : 1
16/08/28 12:27:10 INFO mapreduce.JobSubmitter: number of splits:1
16/08/28 12:27:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1915471944_0002
16/08/28 12:27:10 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
16/08/28 12:27:10 INFO mapreduce.Job: Running job: job_local1915471944_0002
16/08/28 12:27:10 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/08/28 12:27:10 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:10 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
16/08/28 12:27:10 INFO mapred.LocalJobRunner: Waiting for map tasks
16/08/28 12:27:10 INFO mapred.LocalJobRunner: Starting task: attempt_local1915471944_0002_m_000000_0
16/08/28 12:27:10 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:10 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:10 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/grep-temp-1779120689/part-r-00000:0+437
16/08/28 12:27:10 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/08/28 12:27:10 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/08/28 12:27:10 INFO mapred.MapTask: soft limit at 83886080
16/08/28 12:27:10 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/08/28 12:27:10 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/08/28 12:27:10 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/08/28 12:27:10 INFO mapred.LocalJobRunner: 
16/08/28 12:27:10 INFO mapred.MapTask: Starting flush of map output
16/08/28 12:27:10 INFO mapred.MapTask: Spilling map output
16/08/28 12:27:10 INFO mapred.MapTask: bufstart = 0; bufend = 263; bufvoid = 104857600
16/08/28 12:27:10 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600
16/08/28 12:27:10 INFO mapred.MapTask: Finished spill 0
16/08/28 12:27:10 INFO mapred.Task: Task:attempt_local1915471944_0002_m_000000_0 is done. And is in the process of committing
16/08/28 12:27:10 INFO mapred.LocalJobRunner: map
16/08/28 12:27:10 INFO mapred.Task: Task 'attempt_local1915471944_0002_m_000000_0' done.
16/08/28 12:27:10 INFO mapred.LocalJobRunner: Finishing task: attempt_local1915471944_0002_m_000000_0
16/08/28 12:27:10 INFO mapred.LocalJobRunner: map task executor complete.
16/08/28 12:27:10 INFO mapred.LocalJobRunner: Waiting for reduce tasks
16/08/28 12:27:10 INFO mapred.LocalJobRunner: Starting task: attempt_local1915471944_0002_r_000000_0
16/08/28 12:27:10 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
16/08/28 12:27:10 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/08/28 12:27:10 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@7efca19d
16/08/28 12:27:10 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=371405600, maxSingleShuffleLimit=92851400, mergeThreshold=245127712, ioSortFactor=10, memToMemMergeOutputsThreshold=10
16/08/28 12:27:10 INFO reduce.EventFetcher: attempt_local1915471944_0002_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
16/08/28 12:27:10 INFO reduce.LocalFetcher: localfetcher#2 about to shuffle output of map attempt_local1915471944_0002_m_000000_0 decomp: 287 len: 291 to MEMORY
16/08/28 12:27:10 INFO reduce.InMemoryMapOutput: Read 287 bytes from map-output for attempt_local1915471944_0002_m_000000_0
16/08/28 12:27:10 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 287, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->287
16/08/28 12:27:10 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
16/08/28 12:27:10 INFO mapred.LocalJobRunner: 1 / 1 copied.
16/08/28 12:27:10 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
16/08/28 12:27:10 INFO mapred.Merger: Merging 1 sorted segments
16/08/28 12:27:10 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 277 bytes
16/08/28 12:27:10 INFO reduce.MergeManagerImpl: Merged 1 segments, 287 bytes to disk to satisfy reduce memory limit
16/08/28 12:27:10 INFO reduce.MergeManagerImpl: Merging 1 files, 291 bytes from disk
16/08/28 12:27:10 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
16/08/28 12:27:10 INFO mapred.Merger: Merging 1 sorted segments
16/08/28 12:27:10 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 277 bytes
16/08/28 12:27:10 INFO mapred.LocalJobRunner: 1 / 1 copied.
16/08/28 12:27:10 INFO mapred.Task: Task:attempt_local1915471944_0002_r_000000_0 is done. And is in the process of committing
16/08/28 12:27:10 INFO mapred.LocalJobRunner: 1 / 1 copied.
16/08/28 12:27:10 INFO mapred.Task: Task attempt_local1915471944_0002_r_000000_0 is allowed to commit now
16/08/28 12:27:10 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1915471944_0002_r_000000_0' to hdfs://localhost:9000/user/hadoop/output/_temporary/0/task_local1915471944_0002_r_000000
16/08/28 12:27:10 INFO mapred.LocalJobRunner: reduce > reduce
16/08/28 12:27:10 INFO mapred.Task: Task 'attempt_local1915471944_0002_r_000000_0' done.
16/08/28 12:27:10 INFO mapred.LocalJobRunner: Finishing task: attempt_local1915471944_0002_r_000000_0
16/08/28 12:27:10 INFO mapred.LocalJobRunner: reduce task executor complete.
16/08/28 12:27:11 INFO mapreduce.Job: Job job_local1915471944_0002 running in uber mode : false
16/08/28 12:27:11 INFO mapreduce.Job:  map 100% reduce 100%
16/08/28 12:27:11 INFO mapreduce.Job: Job job_local1915471944_0002 completed successfully
16/08/28 12:27:11 INFO mapreduce.Job: Counters: 35
 File System Counters
  FILE: Number of bytes read=1311058
  FILE: Number of bytes written=2343873
  FILE: Number of read operations=0
  FILE: Number of large read operations=0
  FILE: Number of write operations=0
  HDFS: Number of bytes read=154592
  HDFS: Number of bytes written=1071
  HDFS: Number of read operations=151
  HDFS: Number of large read operations=0
  HDFS: Number of write operations=16
 Map-Reduce Framework
  Map input records=11
  Map output records=11
  Map output bytes=263
  Map output materialized bytes=291
  Input split bytes=132
  Combine input records=0
  Combine output records=0
  Reduce input groups=5
  Reduce shuffle bytes=291
  Reduce input records=11
  Reduce output records=11
  Spilled Records=22
  Shuffled Maps =1
  Failed Shuffles=0
  Merged Map outputs=1
  GC time elapsed (ms)=0
  Total committed heap usage (bytes)=1061158912
 Shuffle Errors
  BAD_ID=0
  CONNECTION=0
  IO_ERROR=0
  WRONG_LENGTH=0
  WRONG_MAP=0
  WRONG_REDUCE=0
 File Input Format Counters 
  Bytes Read=437
 File Output Format Counters 
  Bytes Written=197
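The counter dump above is the quickest health check on a finished job. When the log runs to thousands of lines, a selected counter can be pulled out with a short awk filter; a minimal sketch (the `job.log` file and sample lines are inlined here for illustration, not captured from a real run):

```shell
# Inline a few counter lines in the same "name=value" format the job log uses,
# so the snippet is self-contained.
cat > job.log <<'EOF'
  Map input records=11
  Map output records=11
  Reduce input groups=5
  Reduce output records=11
EOF

# Split each line on '=' and print the value for the counter of interest.
awk -F= '/Reduce output records/ {print $2}' job.log

rm job.log
```

The same one-liner works directly on a log captured with `2>&1 | tee job.log` around any of the `bin/hadoop` commands above.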
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
hadoop@637c83896b9d:~/hadoop-2.7.3$ bin/hdfs dfs -get output output
hadoop@637c83896b9d:~/hadoop-2.7.3$ cat output/*
cat: output/output: Is a directory
1 dfsadmin
hadoop@637c83896b9d:~/hadoop-2.7.3$ 
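The `cat: output/output: Is a directory` message is harmless: most likely a local `output` directory already existed from an earlier `-get`, so the second copy landed inside it as `output/output`, and the shell glob then matched that nested directory alongside the part file. The behavior can be reproduced with plain local files (the `demo` directory name is illustrative):

```shell
# Recreate the layout a repeated `hdfs dfs -get output output` produces locally:
# a part file plus a nested output/ directory inside output/.
mkdir -p demo/output/output
echo "1 dfsadmin" > demo/output/part-r-00000

# The glob matches the nested directory first (alphabetically), so cat warns
# about it but still prints the part file; `|| true` absorbs the nonzero exit.
cat demo/output/* || true

# Reading the part file directly avoids the message entirely.
cat demo/output/part-r-00000

rm -rf demo
```

While HDFS is still running, `bin/hdfs dfs -cat output/*` reads the result straight out of HDFS and sidesteps the local copy altogether.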


hadoop@637c83896b9d:~/hadoop-2.7.3$ sbin/stop-dfs.sh
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
hadoop@637c83896b9d:~/hadoop-2.7.3$ 


Commit the container and push it to Docker Hub


ubuntu@node2:~$ docker commit 637c83896b9d nutthaphon/hbase:hdfs
sha256:e3dc0f939d25f633e4bc6a5f1fe395b64736e81a5cf7cc3208d4ce7006983fe0
ubuntu@node2:~$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username (nutthaphon): 
Password: 
Login Succeeded
ubuntu@node2:~$ docker push nutthaphon/hbase:hdfs
The push refers to a repository [docker.io/nutthaphon/hbase]
25c04ffed27a: Pushed 
01e76fc6590f: Layer already exists 
447f88c8358f: Layer already exists 
df9a135a6949: Layer already exists 
dbaa8ea1faf9: Layer already exists 
8a14f84e5837: Layer already exists 
hdfs: digest: sha256:2436da12eadc964f8389ec7b1c868563ec53b7af901d8fb179afb7d72df814ac size: 1578
ubuntu@node2:~$ 


Next: Apache HBase - Pseudo-distributed (Part II)