
HBase HA deployment: assorted problems


1. Jar problem: the htrace bundled with newer Hadoop releases no longer contains the classes HBase needs, so a jar has to be copied over from an older Hadoop release.
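A minimal sketch of that copy, assuming the missing classes are the htrace-core 3.x ones (htrace-core4, which newer Hadoop ships, dropped them) and that HBase is installed under /usr/lib/hbase-2.1.4 as in the log further down; the source path and jar version are assumptions:

# Copy the old htrace jar from an older Hadoop installation into HBase's lib directory
cp /path/to/old-hadoop/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar \
   /usr/lib/hbase-2.1.4/lib/client-facing-thirdparty/
# Restart HBase afterwards so the jar is picked up on the classpath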

2. hbase.rootdir = hdfs://mycluster/hbase: mycluster must be mapped to an IP address in the hosts file, otherwise it cannot be resolved.
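A sketch of that hosts-file workaround; the IP below is an assumption and should be the active NameNode's address. (In an HDFS HA setup the usual alternative is to copy hdfs-site.xml and core-site.xml into HBase's conf directory so the mycluster nameservice resolves through the configured failover proxy provider rather than a fixed host entry.)

# /etc/hosts on every HBase node (IP is an assumption)
192.168.19.101   mycluster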

3. Operation category READ is not supported in state standby.

The HMaster cannot start against a standby NameNode: the NameNode that the server starting HBase points to must be switched to active first.
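To check and switch the NameNode state, something like the following can be used (a sketch; nn1 and nn2 are the service IDs defined under dfs.ha.namenodes.mycluster in hdfs-site.xml and are assumptions here):

# Show which NameNode is active and which is standby
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# Fail over so that the NameNode HBase resolves becomes active
hdfs haadmin -failover nn2 nn1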

4. The following error appears: The procedure WAL relies on the ability to hsync for proper operation during component failures, but the underlying filesystem does not support doing so. Please check the config value of 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount that can provide it.

 

Add the following to hbase-site.xml (note that this disables HBase's stream-capability enforcement rather than adding hsync support, so treat it as a workaround):

<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>

 

 

5. This problem is still unsolved: the RegionServer only starts on the same server as the HMaster; on the other servers it fails to start with the error below (see the note after the log):

Tue May 21 08:19:32 EDT 2019 Starting regionserver on node03
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 3819
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 3819
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
2019-05-21 08:19:49,143 INFO [main] regionserver.HRegionServer: STARTING executorService HRegionServer
2019-05-21 08:19:49,189 INFO [main] util.VersionInfo: HBase 2.1.4
2019-05-21 08:19:49,189 INFO [main] util.VersionInfo: Source code repository git://bcd8553a5734/opt/hbase-rm/output/hbase revision=5b7722f8551bca783adb36a920ca77e417ca99d1
2019-05-21 08:19:49,189 INFO [main] util.VersionInfo: Compiled by hbase-rm on Tue Mar 19 19:05:06 UTC 2019
2019-05-21 08:19:49,189 INFO [main] util.VersionInfo: From source with checksum d210900ccde556a0bd80acd860998807
2019-05-21 08:19:54,228 INFO [main] util.ServerCommandLine: hbase.tmp.dir: /tmp/hbase-root
2019-05-21 08:19:54,228 INFO [main] util.ServerCommandLine: hbase.rootdir: hdfs://mycluster/hbase
2019-05-21 08:19:54,228 INFO [main] util.ServerCommandLine: hbase.cluster.distributed: true
2019-05-21 08:19:54,228 INFO [main] util.ServerCommandLine: hbase.zookeeper.quorum: node02,node03,node04
2019-05-21 08:19:54,262 INFO [main] util.ServerCommandLine: env:HBASE_LOGFILE=hbase-root-regionserver-node03.log
2019-05-21 08:19:54,262 INFO [main] util.ServerCommandLine: env:PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
2019-05-21 08:19:54,262 INFO [main] util.ServerCommandLine: env:JAVA_HOME=/usr/lib/jdk1.8.0_211
2019-05-21 08:19:54,262 INFO [main] util.ServerCommandLine: env:LANG=en_US.UTF-8
2019-05-21 08:19:54,262 INFO [main] util.ServerCommandLine: env:XDG_SESSION_ID=76
2019-05-21 08:19:54,262 INFO [main] util.ServerCommandLine: env:SELINUX_LEVEL_REQUESTED=
2019-05-21 08:19:54,262 INFO [main] util.ServerCommandLine: env:SELINUX_ROLE_REQUESTED=
2019-05-21 08:19:54,262 INFO [main] util.ServerCommandLine: env:MAIL=/var/mail/root
2019-05-21 08:19:54,262 INFO [main] util.ServerCommandLine: env:LOGNAME=root
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:JVM_PID=11288
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:HBASE_REST_OPTS=
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:PWD=/root
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:HBASE_ROOT_LOGGER=INFO,RFA
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:LESSOPEN=||/usr/bin/lesspipe.sh %s
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:SHELL=/bin/bash
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:SELINUX_USE_CURRENT_RANGE=
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:HBASE_ENV_INIT=true
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=root
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:HBASE_ZNODE_FILE=/tmp/hbase-root-regionserver.znode
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:SSH_CLIENT=192.168.19.101 51188 22
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-root-regionserver-node03
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:HBASE_LOG_DIR=/usr/lib/hbase-2.1.4/bin/../logs
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:USER=root
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: ib/hbase-2.1.4/bin/../lib/paranamer-2.3.jar:/usr/lib/hbase-2.1.4/bin/../lib/protobuf-java-2.5.0.jar:/usr/lib/hbase-2.1.4/bin/../lib/snappy-java-1.0.5.jar:/usr/lib/hbase-2.1.4/bin/../lib/spymemcached-2.12.2.jar:/usr/lib/hbase-2.1.4/bin/../lib/validation-api-1.1.0.Final.jar:/usr/lib/hbase-2.1.4/bin/../lib/xmlenc-0.52.jar:/usr/lib/hbase-2.1.4/bin/../lib/xz-1.0.jar:/usr/lib/hbase-2.1.4/bin/../lib/zookeeper-3.4.10.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/audience-annotations-0.5.0.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/commons-logging-1.2.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/findbugs-annotations-1.3.9-1.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/log4j-1.2.17.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/slf4j-api-1.7.25.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:HBASE_MANAGES_ZK=false
2019-05-21 08:19:54,263 INFO [main] util.ServerCommandLine: env:SSH_CONNECTION=192.168.19.101 51188 192.168.19.103 22
2019-05-21 08:19:54,264 INFO [main] util.ServerCommandLine: env:HBASE_AUTOSTART_FILE=/tmp/hbase-root-regionserver.autostart
2019-05-21 08:19:54,264 INFO [main] util.ServerCommandLine: env:HBASE_NICENESS=0
2019-05-21 08:19:54,264 INFO [main] util.ServerCommandLine: env:HBASE_OPTS= -XX:+UseConcMarkSweepGC -Dhbase.log.dir=/usr/lib/hbase-2.1.4/bin/../logs -Dhbase.log.file=hbase-root-regionserver-node03.log -Dhbase.home.dir=/usr/lib/hbase-2.1.4/bin/.. -Dhbase.id.str=root -Dhbase.root.logger=INFO,RFA -Dhbase.security.logger=INFO,RFAS
2019-05-21 08:19:54,264 INFO [main] util.ServerCommandLine: env:HBASE_SECURITY_LOGGER=INFO,RFAS
2019-05-21 08:19:54,264 INFO [main] util.ServerCommandLine: env:XDG_RUNTIME_DIR=/run/user/0
2019-05-21 08:19:54,264 INFO [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2019-05-21 08:19:54,264 INFO [main] util.ServerCommandLine: env:HBASE_HOME=/usr/lib/hbase-2.1.4/bin/..
2019-05-21 08:19:54,264 INFO [main] util.ServerCommandLine: env:SHLVL=3
2019-05-21 08:19:54,264 INFO [main] util.ServerCommandLine: env:HOME=/root
2019-05-21 08:19:54,264 INFO [main] util.ServerCommandLine: env:MALLOC_ARENA_MAX=4
2019-05-21 08:19:54,331 INFO [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.211-b12
2019-05-21 08:19:54,331 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p, -XX:+UseConcMarkSweepGC, -Dhbase.log.dir=/usr/lib/hbase-2.1.4/bin/../logs, -Dhbase.log.file=hbase-root-regionserver-node03.log, -Dhbase.home.dir=/usr/lib/hbase-2.1.4/bin/.., -Dhbase.id.str=root, -Dhbase.root.logger=INFO,RFA, -Dhbase.security.logger=INFO,RFAS]
2019-05-21 08:19:59,841 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2019-05-21 08:20:00,318 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-05-21 08:20:04,314 INFO [main] regionserver.RSRpcServices: regionserver/node03:16020 server-side Connection retries=45
2019-05-21 08:20:04,620 INFO [main] ipc.RpcExecutor: Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=3, maxQueueLength=300, handlerCount=30
2019-05-21 08:20:04,655 INFO [main] ipc.RpcExecutor: Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=300, handlerCount=20
2019-05-21 08:20:04,655 INFO [main] ipc.RpcExecutor: Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=300, handlerCount=3
2019-05-21 08:20:04,655 INFO [main] ipc.RpcExecutor: Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=300, handlerCount=1
2019-05-21 08:20:05,028 INFO [main] ipc.RpcServerFactory: Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2019-05-21 08:20:06,202 INFO [main] io.ByteBufferPool: Created with bufferSize=64 KB and maxPoolSize=1.88 KB
2019-05-21 08:20:07,766 INFO [main] ipc.NettyRpcServer: Bind to /192.168.19.103:16020
2019-05-21 08:20:08,938 INFO [main] hfile.CacheConfig: Allocating onheap LruBlockCache size=95.15 MB, blockSize=64 KB
2019-05-21 08:20:09,172 INFO [main] hfile.CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=72.34 KB, freeSize=95.08 MB, maxSize=95.15 MB, heapSize=72.34 KB, minSize=90.39 MB, minFactor=0.95, multiSize=45.20 MB, multiFactor=0.5, singleSize=22.60 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2019-05-21 08:20:09,174 INFO [main] hfile.CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=72.34 KB, freeSize=95.08 MB, maxSize=95.15 MB, heapSize=72.34 KB, minSize=90.39 MB, minFactor=0.95, multiSize=45.20 MB, multiFactor=0.5, singleSize=22.60 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2019-05-21 08:20:13,885 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2019-05-21 08:20:13,888 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2019-05-21 08:20:15,450 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=regionserver:16020 connecting to ZooKeeper ensemble=node02:2181,node03:2181,node04:2181
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=node03
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_211
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jdk1.8.0_211/jre
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: ib/hbase-2.1.4/bin/../lib/paranamer-2.3.jar:/usr/lib/hbase-2.1.4/bin/../lib/protobuf-java-2.5.0.jar:/usr/lib/hbase-2.1.4/bin/../lib/snappy-java-1.0.5.jar:/usr/lib/hbase-2.1.4/bin/../lib/spymemcached-2.12.2.jar:/usr/lib/hbase-2.1.4/bin/../lib/validation-api-1.1.0.Final.jar:/usr/lib/hbase-2.1.4/bin/../lib/xmlenc-0.52.jar:/usr/lib/hbase-2.1.4/bin/../lib/xz-1.0.jar:/usr/lib/hbase-2.1.4/bin/../lib/zookeeper-3.4.10.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/audience-annotations-0.5.0.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/commons-logging-1.2.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/findbugs-annotations-1.3.9-1.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/log4j-1.2.17.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/slf4j-api-1.7.25.jar:/usr/lib/hbase-2.1.4/bin/../lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2019-05-21 08:20:15,649 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
2019-05-21 08:20:15,650 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=root
2019-05-21 08:20:15,650 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/root
2019-05-21 08:20:15,650 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/root
2019-05-21 08:20:15,651 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=node02:2181,node03:2181,node04:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@1517f633
2019-05-21 08:20:16,845 INFO [main-SendThread(node02:2181)] zookeeper.ClientCnxn: Opening socket connection to server node02/192.168.19.102:2181. Will not attempt to authenticate using SASL (unknown error)
2019-05-21 08:20:17,033 INFO [main-SendThread(node02:2181)] zookeeper.ClientCnxn: Socket connection established to node02/192.168.19.102:2181, initiating session
2019-05-21 08:20:17,107 INFO [main-SendThread(node02:2181)] zookeeper.ClientCnxn: Session establishment complete on server node02/192.168.19.102:2181, sessionid = 0x16ad987c35d000d, negotiated timeout = 40000
2019-05-21 08:20:17,812 INFO [main] util.log: Logging initialized @44432ms
2019-05-21 08:20:18,132 INFO [main] http.HttpRequestLog: Http request log for http.requests.regionserver is not defined
2019-05-21 08:20:18,184 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2019-05-21 08:20:18,185 INFO [main] http.HttpServer: Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2019-05-21 08:20:18,189 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2019-05-21 08:20:18,189 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2019-05-21 08:20:18,189 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2019-05-21 08:20:18,321 INFO [main] http.HttpServer: ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2019-05-21 08:20:18,472 INFO [main] http.HttpServer: Jetty bound to port 16030
2019-05-21 08:20:18,474 INFO [main] server.Server: jetty-9.3.25.v20180904, build timestamp: 2018-09-04T17:11:46-04:00, git hash: 3ce520221d0240229c862b122d2b06c12a625732
2019-05-21 08:20:18,680 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@1698d7c0{/logs,file:///usr/lib/hbase-2.1.4/logs/,AVAILABLE}
2019-05-21 08:20:18,680 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@87abc48{/static,file:///usr/lib/hbase-2.1.4/hbase-webapps/static/,AVAILABLE}
2019-05-21 08:20:21,094 INFO [main] handler.ContextHandler: Started o.e.j.w.WebAppContext@234a8f27{/,file:///usr/lib/hbase-2.1.4/hbase-webapps/regionserver/,AVAILABLE}{file:/usr/lib/hbase-2.1.4/hbase-webapps/regionserver}
2019-05-21 08:20:21,137 INFO [main] server.AbstractConnector: Started ServerConnector@68dcfd52{HTTP/1.1,[http/1.1]}{0.0.0.0:16030}
2019-05-21 08:20:21,138 INFO [main] server.Server: Started @47758ms
2019-05-21 08:20:21,541 INFO [regionserver/node03:16020] regionserver.HRegionServer: ClusterId : f0e1b7be-4983-43b4-887c-f7bd61f5d3cd
2019-05-21 08:20:22,478 INFO [ReadOnlyZKClient-node02:2181,node03:2181,node04:2181@0x38e26d79] zookeeper.ZooKeeper: Initiating client connection, connectString=node02:2181,node03:2181,node04:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$43/1339545528@ad54026
2019-05-21 08:20:22,487 INFO [ReadOnlyZKClient-node02:2181,node03:2181,node04:2181@0x38e26d79-SendThread(node04:2181)] zookeeper.ClientCnxn: Opening socket connection to server node04/192.168.19.104:2181. Will not attempt to authenticate using SASL (unknown error)
2019-05-21 08:20:22,495 INFO [ReadOnlyZKClient-node02:2181,node03:2181,node04:2181@0x38e26d79-SendThread(node04:2181)] zookeeper.ClientCnxn: Socket connection established to node04/192.168.19.104:2181, initiating session
2019-05-21 08:20:22,559 INFO [ReadOnlyZKClient-node02:2181,node03:2181,node04:2181@0x38e26d79-SendThread(node04:2181)] zookeeper.ClientCnxn: Session establishment complete on server node04/192.168.19.104:2181, sessionid = 0x36ad987c49d0015, negotiated timeout = 40000
2019-05-21 08:20:22,877 INFO [regionserver/node03:16020] regionserver.RegionServerCoprocessorHost: System coprocessor loading is enabled
2019-05-21 08:20:22,877 INFO [regionserver/node03:16020] regionserver.RegionServerCoprocessorHost: Table coprocessor loading is enabled
2019-05-21 08:20:23,052 INFO [regionserver/node03:16020] regionserver.HRegionServer: reportForDuty to master=node01,16000,1558441163486 with port=16020, startcode=1558441194958
2019-05-21 08:20:26,481 INFO [regionserver/node03:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2019-05-21 08:20:27,012 INFO [regionserver/node03:16020] regionserver.HRegionServer: ***** STOPPING region server 'node03,16020,1558441194958' *****
2019-05-21 08:20:27,012 INFO [regionserver/node03:16020] regionserver.HRegionServer: STOPPED: Failed initialization
2019-05-21 08:20:27,060 ERROR [regionserver/node03:16020] regionserver.HRegionServer: Failed init
org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1951)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1434)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3096)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1154)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:3346)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1570)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:970)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1951)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1434)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3096)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1154)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)

at org.apache.hadoop.ipc.Client.call(Client.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372)
at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1425)
at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1829)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1540)
... 2 more
2019-05-21 08:20:27,158 ERROR [regionserver/node03:16020] regionserver.HRegionServer: em.getFileInfo(FSNamesystem.java:3096)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1154)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
*****
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1951)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1434)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3096)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1154)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)

at org.apache.hadoop.ipc.Client.call(Client.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372)
at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1425)
at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1829)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1540)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:970)
at java.lang.Thread.run(Thread.java:748)
2019-05-21 08:20:27,160 ERROR [regionserver/node03:16020] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2019-05-21 08:20:28,083 INFO [regionserver/node03:16020] regionserver.HRegionServer:
"exceptions.ScannerResetException" : 0,
"RequestSize_num_ops" : 0,
"RequestSize_min" : 0,
"RequestSize_max" : 0,
"RequestSize_mean" : 0,
"RequestSize_25th_percentile" : 0,
"RequestSize_median" : 0,
"RequestSize_75th_percentile" : 0,
"RequestSize_90th_percentile" : 0,
"RequestSize_95th_percentile" : 0,
"RequestSize_98th_percentile" : 0,
"RequestSize_99th_percentile" : 0,
"RequestSize_99.9th_percentile" : 0,
"sentBytes" : 0,
"QueueCallTime_num_ops" : 0,
"QueueCallTime_min" : 0,
"QueueCallTime_max" : 0,
"QueueCallTime_mean" : 0,
"QueueCallTime_25th_percentile" : 0,
"QueueCallTime_median" : 0,
"QueueCallTime_75th_percentile" : 0,
"QueueCallTime_90th_percentile" : 0,
"QueueCallTime_95th_percentile" : 0,
"QueueCallTime_98th_percentile" : 0,
"QueueCallTime_99th_percentile" : 0,
"QueueCallTime_99.9th_percentile" : 0,
"authenticationFailures" : 0
} ],
"beans" : [ ],
"beans" : [ ]
}
2019-05-21 08:20:28,664 WARN [regionserver/node03:16020] regionserver.HRegionServer: Initialize abort timeout task failed
java.lang.IllegalAccessException: Class org.apache.hadoop.hbase.regionserver.HRegionServer can not access a member of class org.apache.hadoop.hbase.regionserver.HRegionServer$SystemExitWhenAbortTimeout with modifiers "private"
at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:102)
at java.lang.reflect.AccessibleObject.slowCheckMemberAccess(AccessibleObject.java:296)
at java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:288)
at java.lang.reflect.Constructor.newInstance(Constructor.java:413)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1044)
at java.lang.Thread.run(Thread.java:748)
2019-05-21 08:20:28,665 INFO [regionserver/node03:16020] regionserver.HRegionServer: Stopping infoServer
2019-05-21 08:20:28,773 INFO [regionserver/node03:16020] handler.ContextHandler: Stopped o.e.j.w.WebAppContext@234a8f27{/,null,UNAVAILABLE}{file:/usr/lib/hbase-2.1.4/hbase-webapps/regionserver}
2019-05-21 08:20:28,816 INFO [regionserver/node03:16020] server.AbstractConnector: Stopped ServerConnector@68dcfd52{HTTP/1.1,[http/1.1]}{0.0.0.0:16030}
2019-05-21 08:20:28,817 INFO [regionserver/node03:16020] handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@87abc48{/static,file:///usr/lib/hbase-2.1.4/hbase-webapps/static/,UNAVAILABLE}
2019-05-21 08:20:28,818 INFO [regionserver/node03:16020] handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@1698d7c0{/logs,file:///usr/lib/hbase-2.1.4/logs/,UNAVAILABLE}
2019-05-21 08:20:28,823 INFO [regionserver/node03:16020] flush.RegionServerFlushTableProcedureManager: Stopping region server flush procedure manager abruptly.
2019-05-21 08:20:28,823 INFO [regionserver/node03:16020] snapshot.RegionServerSnapshotManager: Stopping RegionServerSnapshotManager abruptly.
2019-05-21 08:20:28,823 INFO [regionserver/node03:16020] regionserver.HRegionServer: aborting server node03,16020,1558441194958
2019-05-21 08:20:28,831 INFO [ReadOnlyZKClient-node02:2181,node03:2181,node04:2181@0x38e26d79] zookeeper.ZooKeeper: Session: 0x36ad987c49d0015 closed
2019-05-21 08:20:28,831 INFO [ReadOnlyZKClient-node02:2181,node03:2181,node04:2181@0x38e26d79-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x36ad987c49d0015
2019-05-21 08:20:28,850 INFO [regionserver/node03:16020] regionserver.HRegionServer: stopping server node03,16020,1558441194958; all regions closed.
2019-05-21 08:20:28,851 INFO [regionserver/node03:16020] hbase.ChoreService: Chore service for: regionserver/node03:16020 had [] on shutdown
2019-05-21 08:20:28,855 INFO [regionserver/node03:16020] ipc.NettyRpcServer: Stopping server on /192.168.19.103:16020
2019-05-21 08:20:28,965 INFO [regionserver/node03:16020] zookeeper.ZooKeeper: Session: 0x16ad987c35d000d closed
2019-05-21 08:20:28,965 INFO [regionserver/node03:16020] regionserver.HRegionServer: Exiting; stopping=node03,16020,1558441194958; zookeeper connection closed.
2019-05-21 08:20:28,965 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:67)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:3047)
2019-05-21 08:20:28,977 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x16ad987c35d000d
2019-05-21 08:20:28,981 INFO [Thread-4] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@7af1cd63
2019-05-21 08:20:28,981 INFO [Thread-4] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2019-05-21 08:20:28,989 INFO [Thread-4] regionserver.ShutdownHook: Shutdown hook finished.
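The stack trace above fails in setupWALAndReplication with the same StandbyException as problem 3, which suggests the RegionServer on node03 is talking to a standby NameNode instead of failing over to the active one. A possible check, as a sketch only (service IDs and paths are assumptions), is to confirm the NameNode states from the failing node and make sure the HDFS HA client settings are visible to HBase there:

# Run on node03, the node where the RegionServer fails to start
hdfs haadmin -getServiceState nn1     # nn1/nn2: service IDs from dfs.ha.namenodes.mycluster
hdfs haadmin -getServiceState nn2
# Make the HA client configuration visible to HBase (paths are assumptions)
cp /usr/lib/hadoop/etc/hadoop/hdfs-site.xml /usr/lib/hbase-2.1.4/conf/
cp /usr/lib/hadoop/etc/hadoop/core-site.xml /usr/lib/hbase-2.1.4/conf/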


Original post: https://www.cnblogs.com/junning/p/10902216.html
