
Exception when appending to HDFS via WebHDFS



Problem

{:timestamp=>"2015-03-04T00:02:47.224000+0800", :message=>"Retrying webhdfs write for multiple times. Maybe you should increase retry_interval or reduce number of workers.", :level=>:warn}
{:timestamp=>"2015-03-04T00:02:47.751000+0800", :message=>"Retrying webhdfs write for multiple times. Maybe you should increase retry_interval or reduce number of workers.", :level=>:warn}
{:timestamp=>"2015-03-04T00:02:48.788000+0800", :message=>"Retrying webhdfs write for multiple times. Maybe you should increase retry_interval or reduce number of workers.", :level=>:warn}
{:timestamp=>"2015-03-04T00:02:50.325000+0800", :message=>"Retrying webhdfs write for multiple times. Maybe you should increase retry_interval or reduce number of workers.", :level=>:warn}
{:timestamp=>"2015-03-04T00:02:52.361000+0800", :message=>"Max write retries reached. Exception: {\"RemoteException\":{\"exception\":\"AlreadyBeingCreatedException\",\"javaClassName\":\"org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException\",\"message\":\"Failed to create file [/user/yimr/2015-03-03/16.log] for [DFSClient_NONMAPREDUCE_1517029404_16] for client [192.168.2.207], because this file is already being created by [DFSClient_NONMAPREDUCE_-190688369_16] on [192.168.2.207]\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2636)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2462)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2700)\\n\\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2663)\\n\\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:559)\\n\\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:388)\\n\\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\\n\\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)\\n\\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)\\n\\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)\\n\\tat java.security.AccessController.doPrivileged(Native Method)\\n\\tat javax.security.auth.Subject.doAs(Subject.java:415)\\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)\\n\\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)\\n\"}}", :level=>:error}

 

 

Cause:

The cluster has only one machine, but the data replication factor was left at its default of 3, and appending through WebHDFS failed with the error above: with a single DataNode the required replicas cannot be satisfied, so the append is retried, and the retries then hit a file that the earlier failed append left open, which is what the AlreadyBeingCreatedException reports. After setting the replication factor to 1, writes work normally.
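A minimal sketch of the fix, assuming the replication factor is set cluster-wide in hdfs-site.xml (dfs.replication is the standard property; the NameNode and DataNode need the updated configuration):

<!-- hdfs-site.xml: on a single-DataNode cluster, keep replication at 1 -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

Files already created with a higher replication factor can be lowered afterwards with something like: hdfs dfs -setrep -w 1 /user/yimr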


Original article: http://www.cnblogs.com/osroot/p/4312226.html
