1. Error message
2014-02-24 12:15:48,507 WARN [Thread-2] util.DynamicClassLoader (DynamicClassLoader.java:<init>(106)) - Failed to identify the fs of dir hdfs://fulonghadoop/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs
Solution
Add the following dependency to the project's pom.xml:
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>2.7.2</version>
</dependency>
Then add the following property to hdfs-site.xml or core-site.xml:
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
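
If the error is thrown by a standalone Java client or by a shaded/fat jar (where the META-INF/services entry that registers the hdfs scheme can be lost during merging), the same mapping can be set programmatically instead of in the XML files. A minimal sketch, assuming a client that only needs to reach the cluster; the hdfs://fulonghadoop URI is taken from the log above and should be replaced with your own nameservice:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSchemeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Equivalent to the fs.hdfs.impl property above: name the class that
        // implements the hdfs:// scheme instead of relying on the service-loader
        // entry that a shaded jar may have dropped.
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        FileSystem fs = FileSystem.get(URI.create("hdfs://fulonghadoop"), conf);
        System.out.println(fs.exists(new Path("/hbase/lib")));
        fs.close();
    }
}

Note that the hadoop-hdfs jar still has to be on the classpath; fs.hdfs.impl only names the class to load, it does not provide it.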
2. Error message
ERROR [ClientFinalizer-shutdown-hook] hdfs.DFSClient: Failed to close inode 148879
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /hbase/oldWALs/hadoop-4%2C16020%2C1544498293590.default.1545167908560 (inode 148879): File is not open for writing. Holder DFSClient_NONMAPREDUCE_328068851_1 does not have any open files
Solution
(1) Adjust the following HDFS configuration parameters:
dfs.datanode.max.transfer.threads
dfs.datanode.max.xcievers (the legacy name of the same setting)
(2) Raise dfs.datanode.max.xcievers from 4096 to 8192, or higher, according to your actual workload.
(3) Raise the maximum number of open files:
ulimit -a                       # check the current limits
ulimit -n 65535                 # raise the open-file limit for the current session
vim /etc/security/limits.conf   # make the change permanent by adding:
* soft nofile 65535
* hard nofile 65535
Then update the HDFS configuration (hdfs-site.xml):
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>40000</value>
</property>
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>65535</value>
</property>
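
To confirm that the daemons will actually pick up the new values, one option is to load the edited hdfs-site.xml with the same Configuration class the daemons use and print the effective settings. A rough sketch, assuming the file lives at /etc/hadoop/conf/hdfs-site.xml (adjust the path to your installation):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class TransferThreadsCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration(false); // skip the bundled defaults
        // Path is an assumption; point it at the hdfs-site.xml your DataNodes read.
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        System.out.println("dfs.datanode.max.transfer.threads = "
                + conf.get("dfs.datanode.max.transfer.threads", "<unset, default 4096>"));
        System.out.println("dfs.datanode.max.xcievers (legacy) = "
                + conf.get("dfs.datanode.max.xcievers", "<unset>"));
    }
}

The values have to be changed on every DataNode and the DataNodes restarted; editing them only on a client machine has no effect.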
3. Error message
INFO [regionserver/hadoop-4/192.168.168.86:16020-SendThread(hadoop-6:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 26667ms for sessionid 0x3682c0f03c60033, closing socket connection and attempting reconnect
2019-01-09 21:54:13,016 INFO [main-SendThread(hadoop-6:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 26669ms for sessionid 0x3682c0f03c60032, closing socket connection and attempting reconnect
2019-01-09 21:54:13,018 INFO [LeaseRenewer:work@cluster1] retry.RetryInvocationHandler: Exception while invoking renewLease of class ClientNamenodeProtocolTranslatorPB over hadoop-1/192.168.168.83:9000. Trying to fail over immediately.
org.apache.hadoop.net.ConnectTimeoutException: Call From hadoop-4/192.168.168.86 to hadoop-1:9000 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=hadoop-1/192.168.168.83:9000]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout
Solution (the timeouts are suspected to be caused by HBase compacting HFiles)
HBase changes (hbase-site.xml):
<property>
  <name>hbase.rpc.timeout</name>
  <value>3600000</value>
</property>
HDFS changes (hdfs-site.xml):
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>3600000</value>
</property>
<property>
  <name>dfs.socket.timeout</name>
  <value>3600000</value>
</property>
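
hbase.rpc.timeout can also be raised per application on the client side, which is a quick way to test the change before rolling it out in hbase-site.xml; the dfs.* timeouts, by contrast, are read by the RegionServers' embedded DFS client and by the DataNodes, so they have to be changed on the servers. A minimal client-side sketch using the standard HBase client API; the table name is hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class LongRpcTimeoutClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Same value as the hbase-site.xml change above: 1 hour, in milliseconds.
        conf.setInt("hbase.rpc.timeout", 3600000);
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) { // hypothetical table
            System.out.println("Connected with hbase.rpc.timeout = "
                    + conf.get("hbase.rpc.timeout") + " ms");
        }
    }
}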