[Big Data] java.net.ConnectException: Call From localhost/127.0.0.1 to 192.168.232.138:9000 failed on connection exception

java.net.ConnectException: Call From localhost/127.0.0.1 to 192.168.232.138:9000 failed on connection exception: java.net.ConnectException: Connection refused; 
22/05/03 00:34:57 INFO client.RMProxy: Connecting to ResourceManager at /192.168.232.138:8032
java.net.ConnectException: Call From localhost/127.0.0.1 to 192.168.232.138:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:827)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:757)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
	at org.apache.hadoop.ipc.Client.call(Client.java:1495)
	at org.apache.hadoop.ipc.Client.call(Client.java:1394)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:800)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1673)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1524)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1521)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1521)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1632)
	at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
	at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:279)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:145)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
	at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:244)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:158)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:701)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:814)
	at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:423)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1610)
	at org.apache.hadoop.ipc.Client.call(Client.java:1441)
	... 45 more

Connection Refused

You get a ConnectionRefused exception when there is a machine at the address specified, but there is no program listening on the specific TCP port the client is using, and there is no firewall in the way silently dropping TCP connection requests. If you do not know what a TCP connection request is, please consult the specification.

Unless there is a configuration error at either end, a common cause for this is that the Hadoop service isn't running.
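As a quick check of that, you can verify on the server that the daemons are actually up. A minimal sketch: `jps` ships with the JDK, and `NameNode`/`ResourceManager` are the stock Hadoop daemon names, which may differ in your distribution.

```shell
# List Java/Hadoop daemons via jps, falling back to ps if jps is
# unavailable. The bracketed grep pattern avoids matching grep itself.
check_hadoop_procs() {
  { jps 2>/dev/null || ps -ef; } | grep -E '[N]ameNode|[R]esourceManager' \
    || echo "no NameNode/ResourceManager process found"
}

check_hadoop_procs
```

If nothing is found, start the services before debugging anything else.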

This stack trace is very common when the cluster is being shut down, because at that point the Hadoop services are being torn down across the cluster, which is visible to those services and applications which haven't been shut down themselves. Seeing this error message during cluster shutdown is not anything to worry about.

If the application or cluster is not working, and this message appears in the log, then it is more serious.

The exception text declares both the hostname and the port to which the connection failed. The port can be used to identify the service. For example, port 9000 is the HDFS port. Consult the Ambari port reference, and/or those of the supplier of your Hadoop management tools.
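For an HDFS URI, the host and port in the message come from the client's own `fs.defaultFS` setting in `core-site.xml`. A minimal sketch of that lookup, using a made-up sample file under `/tmp` rather than your real configuration:

```shell
# Hypothetical core-site.xml: the client builds the NameNode address
# "hdfs://<host>:<port>" from fs.defaultFS, so the address in the
# exception (192.168.232.138:9000) should match this value.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.232.138:9000</value>
  </property>
</configuration>
EOF

# Extract the address the client will try to connect to
grep -A1 'fs.defaultFS' /tmp/core-site.xml | grep -o 'hdfs://[^<]*'
```

If the value printed from your real file differs from the address in the exception, the client is reading a different configuration than you think.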

Check that the hostname the client is using is correct. If it's in a Hadoop configuration option, examine it carefully and try doing a ping by hand.
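One way to do that check, sketched with `getent` (which consults /etc/hosts and DNS in the same order the system resolver does); `namenode-host` is a placeholder for the hostname in your configuration:

```shell
# Resolve a hostname the way the client would; prints "IP hostname"
# on success, or a diagnostic when there is no entry anywhere.
resolve_host() {
  getent hosts "$1" || echo "no entry for $1 in /etc/hosts or DNS"
}

resolve_host localhost       # sanity check against the loopback name
resolve_host namenode-host   # placeholder: use your real hostname here
```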
Check that the IP address the client resolves for the hostname is correct.

Make sure the destination address in the exception isn't 0.0.0.0; this means that you haven't actually configured the client with the real address for that service, and instead it is picking up the server-side property telling the service to listen on every network interface for connections.

If the error message says the remote service is on "127.0.0.1" or "localhost" that means the configuration file is telling the client that the service is on the local server. If your client is trying to talk to a remote system, then your configuration is broken.

Check that there isn't an entry for your hostname mapped to 127.0.0.1 or 127.0.1.1 in /etc/hosts (Ubuntu is notorious for this).
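A sketch of that check against a made-up sample file (`hadoop-master` and the addresses are illustrative; run the same grep against your real `/etc/hosts`):

```shell
# Sample /etc/hosts showing the classic Ubuntu pitfall: the machine's
# own hostname also mapped to 127.0.1.1, so services resolve it to
# loopback and remote clients get "Connection refused".
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1        localhost
127.0.1.1        hadoop-master
192.168.232.138  hadoop-master
EOF

# Any loopback line carrying your Hadoop hostname is suspect:
grep -E '^127\.' /tmp/hosts.sample | grep 'hadoop-master'
```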

Check that the port the client is trying to talk to matches the port the server is offering its service on. The netstat command is useful there.
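For illustration, here is what the relevant lines of `netstat -tlnp` (or `ss -tlnp`) might look like on the server; the output below is a captured-style sample, not live data:

```shell
# Hypothetical "netstat -tlnp" lines from the NameNode host. Note the
# local-address column: 127.0.0.1:9000 means HDFS only accepts local
# connections, and remote clients will be refused.
sample='tcp  0  0 127.0.0.1:9000        0.0.0.0:*  LISTEN  2101/java
tcp  0  0 192.168.232.138:8032  0.0.0.0:*  LISTEN  2390/java'

printf '%s\n' "$sample" | grep ':9000'
```

On a healthy cluster you would want the 9000 line to show the server's real IP or 0.0.0.0 (all interfaces), not 127.0.0.1.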

On the server, try a telnet localhost <port> to see if the port is open there.

On the client, try a telnet <server> <port> to see if the port is accessible remotely.
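telnet is often not installed on modern servers; a stand-in sketch using bash's built-in `/dev/tcp` redirection (this assumes bash and the coreutils `timeout` command are available):

```shell
# Probe a TCP port without telnet. "open" means something is listening;
# "refused or filtered" matches the exception in this article.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "open"
  else
    echo "refused or filtered"
  fi
}

check_port 127.0.0.1 9000         # on the server: is HDFS listening?
check_port 192.168.232.138 9000   # on the client: reachable remotely?
```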

Try connecting to the server/port from a different machine, to see if it is just the single client misbehaving.

If your client and the server are in different subdomains, it may be that the configuration of the service is only publishing the basic hostname, rather than the Fully Qualified Domain Name. The client in the different subdomain can then unintentionally attempt to bind to a host in the local subdomain, and fail.

If you are using a Hadoop-based product from a third party, please use the support channels provided by the vendor.

Please do not file bug reports related to your problem, as they will be closed as Invalid.

See also Server Overflow

None of these are Hadoop problems; they are Hadoop, host, network, and firewall configuration issues. As it is your cluster, only you can find out and track down the problem.

Added: 2022-05-05 11:25:14  Updated: 2022-05-05 11:28:36
 