Friday, 15 May 2015

java - Error starting the NameNode daemon -

My goal is to launch the NameNode daemon. I need to work with the HDFS file system: copy files into it from the local file system and create folders in HDFS, and that requires starting the NameNode daemon on the port specified in the conf/core-site.xml configuration file. I launched the script

./hadoop namenode
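
For reference, the RPC port the daemon binds to is taken from fs.default.name in conf/core-site.xml. A minimal sketch of that file, with the host and port assumed from the log below (one, port 2000):

<configuration>
  <property>
    <!-- NameNode RPC endpoint; host "one" and port 2000 are assumed from the log below -->
    <name>fs.default.name</name>
    <value>hdfs://one:2000</value>
  </property>
</configuration>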

Running it produced the following messages:

2013-02-17 12:29:37,493 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = one/192.168.1.8
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
************************************************************/
2013-02-17 12:29:38,325 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-17 12:29:38,400 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-17 12:29:38,427 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-17 12:29:38,427 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-02-17 12:29:39,509 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-17 12:29:39,542 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-02-17 12:29:39,633 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-02-17 12:29:39,635 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-02-17 12:29:39,704 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
2013-02-17 12:29:39,708 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2013-02-17 12:29:39,708 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
2013-02-17 12:29:39,708 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-02-17 12:29:42,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2013-02-17 12:29:42,737 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-02-17 12:29:42,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-02-17 12:29:42,937 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-02-17 12:29:42,940 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-02-17 12:29:45,820 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-02-17 12:29:46,229 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-02-17 12:29:46,836 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-02-17 12:29:47,133 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-02-17 12:29:47,134 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 loaded in 0 seconds.
2013-02-17 12:29:47,134 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-hadoop/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-02-17 12:29:47,163 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-02-17 12:29:47,375 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-02-17 12:29:47,479 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-02-17 12:29:47,480 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 6294 msecs
2013-02-17 12:29:47,919 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-02-17 12:29:47,919 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 430 msec
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 6 secs.
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-02-17 12:29:48,198 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-02-17 12:29:48,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 129 msec
2013-02-17 12:29:48,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 129 msec processing time, 129 msec clock time, 1 cycles
2013-02-17 12:29:48,280 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-02-17 12:29:48,280 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-02-17 12:29:48,280 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-02-17 12:29:48,711 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-02-17 12:29:48,836 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort2000 registered.
2013-02-17 12:29:48,836 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort2000 registered.
2013-02-17 12:29:48,865 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode up at: one/192.168.1.8:2000
2013-02-17 12:30:23,264 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-02-17 12:30:25,326 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-02-17 12:30:25,727 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-02-17 12:30:25,997 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-02-17 12:30:26,269 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.net.BindException: Address already in use
2013-02-17 12:30:26,442 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
2013-02-17 12:30:26,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2013-02-17 12:30:26,446 INFO org.apache.hadoop.ipc.Server: Stopping server on 2000
2013-02-17 12:30:26,446 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2013-02-17 12:30:26,616 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:722)
2013-02-17 12:30:26,761 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:344)
    at sun.nio.ch.Net.bind(Net.java:336)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer.start(HttpServer.java:581)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:445)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:353)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:353)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:305)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2013-02-17 12:30:26,784 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at one/192.168.1.8
************************************************************/

Please help me get the NameNode daemon started, so that I can go on with starting the Hadoop application.
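
For completeness, the work I want to do once the daemon is up is just ordinary hadoop fs commands, roughly like this (the paths are placeholders):

./hadoop fs -mkdir /user/hadoop/input                      # create a folder in HDFS
./hadoop fs -put /tmp/local-file.txt /user/hadoop/input/   # copy a local file into HDFS
./hadoop fs -ls /user/hadoop/input                         # check the result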

2013-02-17 12:30:26,761 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use

It looks like you have a process already listening on one of the ports the NameNode binds to, which most likely means you already have an instance of the NameNode process running.
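
To confirm, you can check what is listening on the two ports from your log (2000 for RPC, 50070 for the web UI); a sketch, assuming a Linux machine:

sudo netstat -tlnp | grep -E ':(2000|50070)'   # -t TCP, -l listening sockets, -n numeric, -p owning PID/name
sudo lsof -i :2000 -i :50070                   # alternative, if lsof is installed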

You should be able to use either the jps -v command to list the running Java processes for the current user, or ps aux | grep java to list all running Java processes.
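
For example (a sketch; 12345 is a placeholder for whatever PID the listing actually shows):

jps -v                 # JVMs of the current user, with their command-line arguments
ps aux | grep [j]ava   # all Java processes; [j] keeps the grep itself out of the output
kill 12345             # stop the stale NameNode
./hadoop namenode      # then start the daemon again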

java linux unix hadoop mapreduce
