
Hadoop error when starting Hadoop


  • Hi, I can't resolve my problem when running Hadoop with start-all.sh:

    rochdi@127:~$ start-all.sh

    /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected

    starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-namenode-127.0.0.1

    localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected

    localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-datanode-127.0.0.1

    localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected

    localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-secondarynamenode-127.0.0.1

    /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected

    starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-jobtracker-127.0.0.1

    localhost: /usr/local/hadoop/bin/hadoop-daemon.sh: line 62: [: localhost: integer expression expected

    localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-rochdi-tasktracker-127.0.0.1

    localhost: Error: Could not find or load main class localhost

    My PATH:

    rochdi@127:~$ echo "$PATH"
    
    /usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/hadoop/bin:/usr/local/hadoop/lib
    

    Before the error appeared, I changed my /etc/hosts file to:

    127.0.0.1 localhost
    127.0.1.1 ubuntu.local ubuntu
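
    To verify that the machine's hostname matches /etc/hosts (a quick sketch; the grep assumes the hostname appears literally in the file):

    hostname                          # current machine name
    hostname -f                       # fully qualified name, if configured
    grep "$(hostname)" /etc/hosts     # the name should appear in /etc/hosts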

    and I configured my ~/.bashrc file as:

    export HADOOP_PREFIX=/usr/local/hadoop

    export PATH=$PATH:$HADOOP_PREFIX/bin
    
    export JAVA_HOME=/usr/lib/jvm/java-7-oracle
     

    and the jps command shows:

    rochdi@127:~$ jps
         3427 Jps

     

    Help me, please.

     
      December 18, 2021 12:07 PM IST
  • I resolved the problem: I just changed my hostname, and all nodes start. But when I stop them I get this message:

    rochdi@acer:~$ jps
    4605 NameNode
    5084 SecondaryNameNode
    5171 JobTracker
    5460 Jps
    5410 TaskTracker
    rochdi@acer:~$ stop-all.sh 
    stopping jobtracker
    localhost: no tasktracker to stop
    stopping namenode
    localhost: no datanode to stop
    localhost: stopping secondarynamenode
      December 23, 2021 1:33 PM IST
  • The OP posted that the main error was fixed by changing the hostname (answered Dec 6 '13 at 14:19). That suggests issues with the /etc/hosts file and the 'slaves' file on the master. Remember that each hostname in the cluster must match the values in those files. When an XML file is wrongly configured, it normally shows up as connectivity issues between the services' ports.
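
    As a minimal single-node illustration (the names and the conf/slaves path are assumptions based on the Hadoop 1.x layout in the question), the hostname and the two files should agree:

    $ cat /etc/hosts
    127.0.0.1   localhost
    127.0.1.1   ubuntu.local ubuntu
    $ hostname
    ubuntu
    $ cat /usr/local/hadoop/conf/slaves
    localhost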

    From the message "no ${SERVICE} to stop", most probably the previous start-all.sh left the Java processes orphaned. The solution is to stop each process manually, e.g.

    $ kill -9 4605
    

     

    and then run the start-all.sh command again.
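
    If several daemons are orphaned, a small sketch to clear them all at once (check the jps output first; this kills every JVM that jps lists except jps itself):

    for pid in $(jps | grep -vi jps | awk '{print $1}'); do
        kill -9 "$pid"
    done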

    It's important to mention that this is an old question; Hadoop now has versions 2 and 3, and I strongly recommend using one of the latest versions.

      January 5, 2022 2:23 PM IST
  • After extracting the Hadoop tar file, open your ~/.bashrc file and add the following at the end:

    export HADOOP_HOME=/usr/local/hadoop 
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native 
    export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin 
    export HADOOP_INSTALL=$HADOOP_HOME
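
    After saving, reload the shell configuration and verify that the hadoop command is found:

    source ~/.bashrc
    which hadoop     # should print /usr/local/hadoop/bin/hadoop
    hadoop version   # read-only check that the install works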

     

    Then edit $HADOOP_HOME/etc/hadoop/core-site.xml and add the following configuration before starting Hadoop:

    <configuration>
       <property>
          <name>fs.default.name</name>
          <value>hdfs://localhost:9000</value>
       </property>
    </configuration>
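
    On a fresh single-node install, starting Hadoop then typically looks like this (a sketch assuming a Hadoop 2.x layout, which the sbin paths above suggest; formatting erases any existing HDFS data, so only do it once on a brand-new cluster):

    hdfs namenode -format   # one-time, on a new cluster only
    start-dfs.sh            # starts NameNode, DataNode, SecondaryNameNode
    jps                     # confirm the daemons are running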

     


     
      January 10, 2022 12:20 PM IST
  • Use the server's IP address instead of localhost in core-site.xml, and check your entries in /etc/hosts and the slaves file.
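
    For example (192.168.1.10 is a hypothetical address; substitute your server's real IP):

    <property>
       <name>fs.default.name</name>
       <value>hdfs://192.168.1.10:9000</value>
    </property>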

     
      January 17, 2022 1:53 PM IST