
[Big Data] Hadoop standalone configuration + pseudo-distributed configuration

For the installation steps, see my earlier blog post.

My system is Ubuntu 21.04 with KDE, but as long as you are on Ubuntu these commands should work unchanged.

Non-distributed configuration (standalone mode)

Summary

# Copy all of Hadoop's XML configuration files into a newly created input folder to serve as job input
mkdir ./input && cp /opt/software/hadoop-3.3.1/etc/hadoop/*.xml ./input
# Run the grep example: take every file in input, extract words matching the regex dfs[a-z.]+, count their occurrences, and write the result to the output folder
hadoop jar /opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep ./input ./output 'dfs[a-z.]+'
# Inspect the results
cat ./output/*
ls ./output/

After a successful run you will see output like the following: the job prints its statistics, and the result shows that the word dfsadmin, the only match for the regex, occurred once.

bernard@aqua:~/test$ cat ./output/*  
1       dfsadmin
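The count above can be sanity-checked without Hadoop: plain GNU grep plus coreutils computes the same per-match tally. A minimal sketch on a throwaway file (the path and sample text below are invented for illustration):

```shell
# Create a scratch input file containing one occurrence of "dfsadmin",
# mimicking a description line from hadoop-policy.xml
mkdir -p /tmp/grep-check
printf 'ACL for the dfsadmin command\n' > /tmp/grep-check/sample.xml

# Extract each match of dfs[a-z.]+ and count occurrences -- roughly what
# the MapReduce grep example's map, combine, and reduce phases compute
grep -hoE 'dfs[a-z.]+' /tmp/grep-check/*.xml | sort | uniq -c | sort -rn
```

This prints `1 dfsadmin` (with uniq's leading padding), matching the job output above.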

Detail

bernard@aqua:~/test$ screenfetch # show system information (distro ASCII art omitted)
bernard@aqua
OS: Ubuntu 21.04 hirsute
Kernel: x86_64 Linux 5.11.0-22-generic
Uptime: 2d 21h 33m
Packages: 2117
Shell: bash 5.1.4
Resolution: 1366x768
DE: KDE 5.80.0 / Plasma 5.21.4
WM: KWin
GTK Theme: Breeze [GTK2/3]
Icon Theme: breeze
Disk: 17G / 147G (12%)
CPU: Intel Core i3-4030U @ 4x 1.8GHz [46.0°C]
GPU: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 0b)
RAM: 2336MiB / 3834MiB
bernard@aqua:~$ hadoop version # check that Hadoop is installed, and where
Hadoop 3.3.1
Source code repository https://github.com/apache/hadoop.git -r a3b9c37a397ad4188041dd80621bdeefc46885f2
Compiled by ubuntu on 2021-06-15T05:13Z
Compiled with protoc 3.7.1
From source with checksum 88a4ddb2299aca054416d6b7f81ca55
This command was run using /opt/software/hadoop-3.3.1/share/hadoop/common/hadoop-common-3.3.1.jar
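
In a setup script you often want just the version number rather than the whole banner. One way, sketched here against a saved copy of the banner above so it runs even without hadoop on the PATH, is to parse the first line:

```shell
# In a real script: banner=$(hadoop version)
banner='Hadoop 3.3.1
Source code repository https://github.com/apache/hadoop.git -r a3b9c37a397ad4188041dd80621bdeefc46885f2
Compiled by ubuntu on 2021-06-15T05:13Z'

# The version number is the second field of the first line
version=$(printf '%s\n' "$banner" | head -n1 | awk '{print $2}')
echo "$version"   # prints 3.3.1
```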

bernard@aqua:~$ ls /opt/software/hadoop-3.3.1/share/hadoop/mapreduce/
hadoop-mapreduce-client-app-3.3.1.jar
hadoop-mapreduce-client-common-3.3.1.jar
hadoop-mapreduce-client-core-3.3.1.jar
hadoop-mapreduce-client-hs-3.3.1.jar
hadoop-mapreduce-client-hs-plugins-3.3.1.jar
hadoop-mapreduce-client-jobclient-3.3.1.jar
hadoop-mapreduce-client-jobclient-3.3.1-tests.jar
hadoop-mapreduce-client-nativetask-3.3.1.jar
hadoop-mapreduce-client-shuffle-3.3.1.jar
hadoop-mapreduce-client-uploader-3.3.1.jar
hadoop-mapreduce-examples-3.3.1.jar # this is the examples jar we will invoke
jdiff
lib-examples
sources
bernard@aqua:~$ hadoop jar /opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar 
# with no arguments, the jar lists the available example programs
An example program must be given as the first argument.
Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.
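
grep is only one of these programs; each is run the same way, as `hadoop jar <examples-jar> <program> <input> <output> [args]`. To get a feel for what, say, wordcount produces (per-word counts), here is a rough local imitation using coreutils instead of Hadoop; the file and its contents are invented for the sketch:

```shell
mkdir -p /tmp/wc-demo
printf 'hello hadoop\nhello world\n' > /tmp/wc-demo/a.txt

# One word per line (the "map"/tokenize step), then count duplicates
# (the "reduce"/sum-per-key step)
tr -s ' ' '\n' < /tmp/wc-demo/a.txt | sort | uniq -c | sort -rn
```

hello appears twice, hadoop and world once each, mirroring the counts the real wordcount job would write to its output directory.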

Go into the test folder and copy all of Hadoop's XML configuration files into a newly created input folder to serve as input:

bernard@aqua:~/test$ mkdir ./input && cp /opt/software/hadoop-3.3.1/etc/hadoop/*.xml ./input

Run the grep example: take every file in input, extract words matching the regex dfs[a-z.]+, count their occurrences, and write the result to the output folder:

bernard@aqua:~/test$ hadoop jar /opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep ./input ./output 'dfs[a-z.]+'
2021-08-13 15:58:48,390 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2021-08-13 15:58:48,772 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2021-08-13 15:58:48,772 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2021-08-13 15:58:49,402 INFO input.FileInputFormat: Total input files to process : 10
2021-08-13 15:58:49,480 INFO mapreduce.JobSubmitter: number of splits:10
2021-08-13 15:58:50,303 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local2043931804_0001
2021-08-13 15:58:50,303 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-08-13 15:58:50,633 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
2021-08-13 15:58:50,635 INFO mapreduce.Job: Running job: job_local2043931804_0001
2021-08-13 15:58:50,640 INFO mapred.LocalJobRunner: OutputCommitter set in config null
2021-08-13 15:58:50,665 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:50,665 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:50,666 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2021-08-13 15:58:50,761 INFO mapred.LocalJobRunner: Waiting for map tasks
2021-08-13 15:58:50,761 INFO mapred.LocalJobRunner: Starting task: attempt_local2043931804_0001_m_000000_0
2021-08-13 15:58:50,806 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:50,806 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:50,888 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:50,893 INFO mapred.MapTask: Processing split: file:/home/bernard/test/input/hadoop-policy.xml:0+11765
2021-08-13 15:58:50,996 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2021-08-13 15:58:50,996 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2021-08-13 15:58:50,996 INFO mapred.MapTask: soft limit at 83886080
2021-08-13 15:58:50,996 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2021-08-13 15:58:50,997 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2021-08-13 15:58:51,005 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-08-13 15:58:51,058 INFO mapred.LocalJobRunner: 
2021-08-13 15:58:51,058 INFO mapred.MapTask: Starting flush of map output
2021-08-13 15:58:51,059 INFO mapred.MapTask: Spilling map output
2021-08-13 15:58:51,059 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
2021-08-13 15:58:51,059 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
2021-08-13 15:58:51,096 INFO mapred.MapTask: Finished spill 0
2021-08-13 15:58:51,113 INFO mapred.Task: Task:attempt_local2043931804_0001_m_000000_0 is done. And is in the process of committing
2021-08-13 15:58:51,134 INFO mapred.LocalJobRunner: map
2021-08-13 15:58:51,134 INFO mapred.Task: Task 'attempt_local2043931804_0001_m_000000_0' done.
2021-08-13 15:58:51,166 INFO mapred.Task: Final Counters for attempt_local2043931804_0001_m_000000_0: Counters: 18
        File System Counters
                FILE: Number of bytes read=294067
                FILE: Number of bytes written=918008
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=275
                Map output records=1
                Map output bytes=17
                Map output materialized bytes=25
                Input split bytes=112
                Combine input records=1
                Combine output records=1
                Spilled Records=1
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=206569472
        File Input Format Counters 
                Bytes Read=11765
2021-08-13 15:58:51,167 INFO mapred.LocalJobRunner: Finishing task: attempt_local2043931804_0001_m_000000_0
2021-08-13 15:58:51,168 INFO mapred.LocalJobRunner: Starting task: attempt_local2043931804_0001_m_000001_0
2021-08-13 15:58:51,170 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:51,170 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:51,170 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:51,172 INFO mapred.MapTask: Processing split: file:/home/bernard/test/input/capacity-scheduler.xml:0+9213
2021-08-13 15:58:51,245 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2021-08-13 15:58:51,245 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2021-08-13 15:58:51,245 INFO mapred.MapTask: soft limit at 83886080
2021-08-13 15:58:51,245 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2021-08-13 15:58:51,245 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2021-08-13 15:58:51,246 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-08-13 15:58:51,256 INFO mapred.LocalJobRunner: 
2021-08-13 15:58:51,256 INFO mapred.MapTask: Starting flush of map output
2021-08-13 15:58:51,261 INFO mapred.Task: Task:attempt_local2043931804_0001_m_000001_0 is done. And is in the process of committing
2021-08-13 15:58:51,263 INFO mapred.LocalJobRunner: map
2021-08-13 15:58:51,264 INFO mapred.Task: Task 'attempt_local2043931804_0001_m_000001_0' done.
2021-08-13 15:58:51,265 INFO mapred.Task: Final Counters for attempt_local2043931804_0001_m_000001_0: Counters: 18
        File System Counters
                FILE: Number of bytes read=304406
                FILE: Number of bytes written=918046
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=244
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=117
                Combine input records=0
                Combine output records=0
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=311951360
        File Input Format Counters 
                Bytes Read=9213
2021-08-13 15:58:51,265 INFO mapred.LocalJobRunner: Finishing task: attempt_local2043931804_0001_m_000001_0
2021-08-13 15:58:51,266 INFO mapred.LocalJobRunner: Starting task: attempt_local2043931804_0001_m_000002_0
2021-08-13 15:58:51,267 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:51,268 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:51,268 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:51,270 INFO mapred.MapTask: Processing split: file:/home/bernard/test/input/kms-acls.xml:0+3518
2021-08-13 15:58:51,342 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2021-08-13 15:58:51,342 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2021-08-13 15:58:51,342 INFO mapred.MapTask: soft limit at 83886080
2021-08-13 15:58:51,342 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2021-08-13 15:58:51,342 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2021-08-13 15:58:51,343 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-08-13 15:58:51,349 INFO mapred.LocalJobRunner: 
2021-08-13 15:58:51,349 INFO mapred.MapTask: Starting flush of map output
2021-08-13 15:58:51,353 INFO mapred.Task: Task:attempt_local2043931804_0001_m_000002_0 is done. And is in the process of committing
2021-08-13 15:58:51,356 INFO mapred.LocalJobRunner: map
2021-08-13 15:58:51,356 INFO mapred.Task: Task 'attempt_local2043931804_0001_m_000002_0' done.
2021-08-13 15:58:51,357 INFO mapred.Task: Final Counters for attempt_local2043931804_0001_m_000002_0: Counters: 18
        File System Counters
                FILE: Number of bytes read=309050
                FILE: Number of bytes written=918084
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=135
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=107
                Combine input records=0
                Combine output records=0
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=417333248
        File Input Format Counters 
                Bytes Read=3518
2021-08-13 15:58:51,357 INFO mapred.LocalJobRunner: Finishing task: attempt_local2043931804_0001_m_000002_0
2021-08-13 15:58:51,357 INFO mapred.LocalJobRunner: Starting task: attempt_local2043931804_0001_m_000003_0
2021-08-13 15:58:51,360 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:51,360 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:51,360 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:51,362 INFO mapred.MapTask: Processing split: file:/home/bernard/test/input/hdfs-site.xml:0+775
2021-08-13 15:58:51,432 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2021-08-13 15:58:51,433 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2021-08-13 15:58:51,433 INFO mapred.MapTask: soft limit at 83886080
2021-08-13 15:58:51,433 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2021-08-13 15:58:51,433 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2021-08-13 15:58:51,433 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-08-13 15:58:51,437 INFO mapred.LocalJobRunner: 
2021-08-13 15:58:51,438 INFO mapred.MapTask: Starting flush of map output
2021-08-13 15:58:51,457 INFO mapred.Task: Task:attempt_local2043931804_0001_m_000003_0 is done. And is in the process of committing
2021-08-13 15:58:51,459 INFO mapred.LocalJobRunner: map
2021-08-13 15:58:51,459 INFO mapred.Task: Task 'attempt_local2043931804_0001_m_000003_0' done.
2021-08-13 15:58:51,459 INFO mapred.Task: Final Counters for attempt_local2043931804_0001_m_000003_0: Counters: 18
        File System Counters
                FILE: Number of bytes read=310951
                FILE: Number of bytes written=918122
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=21
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=108
                Combine input records=0
                Combine output records=0
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=522715136
        File Input Format Counters 
                Bytes Read=775
2021-08-13 15:58:51,460 INFO mapred.LocalJobRunner: Finishing task: attempt_local2043931804_0001_m_000003_0
2021-08-13 15:58:51,460 INFO mapred.LocalJobRunner: Starting task: attempt_local2043931804_0001_m_000004_0
2021-08-13 15:58:51,461 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:51,461 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:51,462 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:51,463 INFO mapred.MapTask: Processing split: file:/home/bernard/test/input/core-site.xml:0+774
2021-08-13 15:58:51,537 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2021-08-13 15:58:51,537 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2021-08-13 15:58:51,537 INFO mapred.MapTask: soft limit at 83886080
2021-08-13 15:58:51,537 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2021-08-13 15:58:51,537 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2021-08-13 15:58:51,540 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-08-13 15:58:51,546 INFO mapred.LocalJobRunner: 
2021-08-13 15:58:51,546 INFO mapred.MapTask: Starting flush of map output
2021-08-13 15:58:51,549 INFO mapred.Task: Task:attempt_local2043931804_0001_m_000004_0 is done. And is in the process of committing
2021-08-13 15:58:51,551 INFO mapred.LocalJobRunner: map
2021-08-13 15:58:51,551 INFO mapred.Task: Task 'attempt_local2043931804_0001_m_000004_0' done.
2021-08-13 15:58:51,551 INFO mapred.Task: Final Counters for attempt_local2043931804_0001_m_000004_0: Counters: 18
        File System Counters
                FILE: Number of bytes read=312851
                FILE: Number of bytes written=918160
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=20
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=108
                Combine input records=0
                Combine output records=0
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=628097024
        File Input Format Counters 
                Bytes Read=774
2021-08-13 15:58:51,551 INFO mapred.LocalJobRunner: Finishing task: attempt_local2043931804_0001_m_000004_0
2021-08-13 15:58:51,551 INFO mapred.LocalJobRunner: Starting task: attempt_local2043931804_0001_m_000005_0
2021-08-13 15:58:51,553 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:51,553 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:51,554 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:51,555 INFO mapred.MapTask: Processing split: file:/home/bernard/test/input/mapred-site.xml:0+758
2021-08-13 15:58:51,627 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2021-08-13 15:58:51,627 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2021-08-13 15:58:51,627 INFO mapred.MapTask: soft limit at 83886080
2021-08-13 15:58:51,627 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2021-08-13 15:58:51,627 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2021-08-13 15:58:51,628 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-08-13 15:58:51,632 INFO mapred.LocalJobRunner: 
2021-08-13 15:58:51,632 INFO mapred.MapTask: Starting flush of map output
2021-08-13 15:58:51,635 INFO mapred.Task: Task:attempt_local2043931804_0001_m_000005_0 is done. And is in the process of committing
2021-08-13 15:58:51,637 INFO mapred.LocalJobRunner: map
2021-08-13 15:58:51,637 INFO mapred.Task: Task 'attempt_local2043931804_0001_m_000005_0' done.
2021-08-13 15:58:51,638 INFO mapred.Task: Final Counters for attempt_local2043931804_0001_m_000005_0: Counters: 18
        File System Counters
                FILE: Number of bytes read=314223
                FILE: Number of bytes written=918198
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=21
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=110
                Combine input records=0
                Combine output records=0
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=730333184
        File Input Format Counters 
                Bytes Read=758
2021-08-13 15:58:51,638 INFO mapred.LocalJobRunner: Finishing task: attempt_local2043931804_0001_m_000005_0
2021-08-13 15:58:51,638 INFO mapred.LocalJobRunner: Starting task: attempt_local2043931804_0001_m_000006_0
2021-08-13 15:58:51,639 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:51,639 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:51,640 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:51,642 INFO mapred.MapTask: Processing split: file:/home/bernard/test/input/yarn-site.xml:0+690
2021-08-13 15:58:51,768 INFO mapreduce.Job: Job job_local2043931804_0001 running in uber mode : false
2021-08-13 15:58:51,821 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2021-08-13 15:58:51,821 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2021-08-13 15:58:51,821 INFO mapred.MapTask: soft limit at 83886080
2021-08-13 15:58:51,822 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2021-08-13 15:58:51,822 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2021-08-13 15:58:51,823 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-08-13 15:58:51,824 INFO mapreduce.Job:  map 100% reduce 0%
2021-08-13 15:58:51,829 INFO mapred.LocalJobRunner: 
2021-08-13 15:58:51,829 INFO mapred.MapTask: Starting flush of map output
2021-08-13 15:58:51,833 INFO mapred.Task: Task:attempt_local2043931804_0001_m_000006_0 is done. And is in the process of committing
2021-08-13 15:58:51,835 INFO mapred.LocalJobRunner: map
2021-08-13 15:58:51,836 INFO mapred.Task: Task 'attempt_local2043931804_0001_m_000006_0' done.
2021-08-13 15:58:51,836 INFO mapred.Task: Final Counters for attempt_local2043931804_0001_m_000006_0: Counters: 18
        File System Counters
                FILE: Number of bytes read=315527
                FILE: Number of bytes written=918236
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=19
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=108
                Combine input records=0
                Combine output records=0
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=123
                Total committed heap usage (bytes)=262668288
        File Input Format Counters 
                Bytes Read=690
2021-08-13 15:58:51,837 INFO mapred.LocalJobRunner: Finishing task: attempt_local2043931804_0001_m_000006_0
2021-08-13 15:58:51,837 INFO mapred.LocalJobRunner: Starting task: attempt_local2043931804_0001_m_000007_0
2021-08-13 15:58:51,838 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:51,839 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:51,839 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:51,841 INFO mapred.MapTask: Processing split: file:/home/bernard/test/input/hdfs-rbf-site.xml:0+683
2021-08-13 15:58:51,937 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2021-08-13 15:58:51,937 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2021-08-13 15:58:51,937 INFO mapred.MapTask: soft limit at 83886080
2021-08-13 15:58:51,937 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2021-08-13 15:58:51,937 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2021-08-13 15:58:51,938 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-08-13 15:58:51,942 INFO mapred.LocalJobRunner: 
2021-08-13 15:58:51,943 INFO mapred.MapTask: Starting flush of map output
2021-08-13 15:58:51,946 INFO mapred.Task: Task:attempt_local2043931804_0001_m_000007_0 is done. And is in the process of committing
2021-08-13 15:58:51,948 INFO mapred.LocalJobRunner: map
2021-08-13 15:58:51,948 INFO mapred.Task: Task 'attempt_local2043931804_0001_m_000007_0' done.
2021-08-13 15:58:51,949 INFO mapred.Task: Final Counters for attempt_local2043931804_0001_m_000007_0: Counters: 18
        File System Counters
                FILE: Number of bytes read=316824
                FILE: Number of bytes written=918274
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=20
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=112
                Combine input records=0
                Combine output records=0
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=368050176
        File Input Format Counters 
                Bytes Read=683
2021-08-13 15:58:51,949 INFO mapred.LocalJobRunner: Finishing task: attempt_local2043931804_0001_m_000007_0
2021-08-13 15:58:51,949 INFO mapred.LocalJobRunner: Starting task: attempt_local2043931804_0001_m_000008_0
2021-08-13 15:58:51,951 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:51,951 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:51,951 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:51,953 INFO mapred.MapTask: Processing split: file:/home/bernard/test/input/kms-site.xml:0+682
2021-08-13 15:58:52,026 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2021-08-13 15:58:52,026 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2021-08-13 15:58:52,026 INFO mapred.MapTask: soft limit at 83886080
2021-08-13 15:58:52,026 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2021-08-13 15:58:52,026 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2021-08-13 15:58:52,027 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-08-13 15:58:52,030 INFO mapred.LocalJobRunner: 
2021-08-13 15:58:52,030 INFO mapred.MapTask: Starting flush of map output
2021-08-13 15:58:52,033 INFO mapred.Task: Task:attempt_local2043931804_0001_m_000008_0 is done. And is in the process of committing
2021-08-13 15:58:52,035 INFO mapred.LocalJobRunner: map
2021-08-13 15:58:52,035 INFO mapred.Task: Task 'attempt_local2043931804_0001_m_000008_0' done.
2021-08-13 15:58:52,035 INFO mapred.Task: Final Counters for attempt_local2043931804_0001_m_000008_0: Counters: 18
        File System Counters
                FILE: Number of bytes read=318120
                FILE: Number of bytes written=918312
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=20
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=107
                Combine input records=0
                Combine output records=0
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=473432064
        File Input Format Counters 
                Bytes Read=682
2021-08-13 15:58:52,036 INFO mapred.LocalJobRunner: Finishing task: attempt_local2043931804_0001_m_000008_0
2021-08-13 15:58:52,036 INFO mapred.LocalJobRunner: Starting task: attempt_local2043931804_0001_m_000009_0
2021-08-13 15:58:52,037 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:52,037 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:52,038 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:52,039 INFO mapred.MapTask: Processing split: file:/home/bernard/test/input/httpfs-site.xml:0+620
2021-08-13 15:58:52,113 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2021-08-13 15:58:52,113 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2021-08-13 15:58:52,113 INFO mapred.MapTask: soft limit at 83886080
2021-08-13 15:58:52,113 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2021-08-13 15:58:52,113 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2021-08-13 15:58:52,114 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-08-13 15:58:52,117 INFO mapred.LocalJobRunner: 
2021-08-13 15:58:52,117 INFO mapred.MapTask: Starting flush of map output
2021-08-13 15:58:52,120 INFO mapred.Task: Task:attempt_local2043931804_0001_m_000009_0 is done. And is in the process of committing
2021-08-13 15:58:52,121 INFO mapred.LocalJobRunner: map
2021-08-13 15:58:52,121 INFO mapred.Task: Task 'attempt_local2043931804_0001_m_000009_0' done.
2021-08-13 15:58:52,122 INFO mapred.Task: Final Counters for attempt_local2043931804_0001_m_000009_0: Counters: 18
        File System Counters
                FILE: Number of bytes read=319354
                FILE: Number of bytes written=918350
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=17
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=110
                Combine input records=0
                Combine output records=0
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=578813952
        File Input Format Counters 
                Bytes Read=620
2021-08-13 15:58:52,122 INFO mapred.LocalJobRunner: Finishing task: attempt_local2043931804_0001_m_000009_0
2021-08-13 15:58:52,122 INFO mapred.LocalJobRunner: map task executor complete.
2021-08-13 15:58:52,126 INFO mapred.LocalJobRunner: Waiting for reduce tasks
2021-08-13 15:58:52,127 INFO mapred.LocalJobRunner: Starting task: attempt_local2043931804_0001_r_000000_0
2021-08-13 15:58:52,137 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:52,137 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:52,138 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:52,143 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@8dba303
2021-08-13 15:58:52,145 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2021-08-13 15:58:52,241 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=626471744, maxSingleShuffleLimit=156617936, mergeThreshold=413471360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2021-08-13 15:58:52,244 INFO reduce.EventFetcher: attempt_local2043931804_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2021-08-13 15:58:52,284 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2043931804_0001_m_000009_0 decomp: 2 len: 6 to MEMORY
2021-08-13 15:58:52,287 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local2043931804_0001_m_000009_0
2021-08-13 15:58:52,289 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->2
2021-08-13 15:58:52,323 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2043931804_0001_m_000006_0 decomp: 2 len: 6 to MEMORY
2021-08-13 15:58:52,324 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local2043931804_0001_m_000006_0
2021-08-13 15:58:52,324 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 2, commitMemory -> 2, usedMemory ->4
2021-08-13 15:58:52,326 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2043931804_0001_m_000003_0 decomp: 2 len: 6 to MEMORY
2021-08-13 15:58:52,327 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local2043931804_0001_m_000003_0
2021-08-13 15:58:52,327 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 3, commitMemory -> 4, usedMemory ->6
2021-08-13 15:58:52,329 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2043931804_0001_m_000000_0 decomp: 21 len: 25 to MEMORY
2021-08-13 15:58:52,331 INFO reduce.InMemoryMapOutput: Read 21 bytes from map-output for attempt_local2043931804_0001_m_000000_0
2021-08-13 15:58:52,331 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 21, inMemoryMapOutputs.size() -> 4, commitMemory -> 6, usedMemory ->27
2021-08-13 15:58:52,332 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2043931804_0001_m_000005_0 decomp: 2 len: 6 to MEMORY
2021-08-13 15:58:52,333 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local2043931804_0001_m_000005_0
2021-08-13 15:58:52,333 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 5, commitMemory -> 27, usedMemory ->29
2021-08-13 15:58:52,335 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2043931804_0001_m_000002_0 decomp: 2 len: 6 to MEMORY
2021-08-13 15:58:52,336 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local2043931804_0001_m_000002_0
2021-08-13 15:58:52,336 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 6, commitMemory -> 29, usedMemory ->31
2021-08-13 15:58:52,337 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2043931804_0001_m_000001_0 decomp: 2 len: 6 to MEMORY
2021-08-13 15:58:52,338 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local2043931804_0001_m_000001_0
2021-08-13 15:58:52,338 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 7, commitMemory -> 31, usedMemory ->33
2021-08-13 15:58:52,339 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2043931804_0001_m_000008_0 decomp: 2 len: 6 to MEMORY
2021-08-13 15:58:52,340 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local2043931804_0001_m_000008_0
2021-08-13 15:58:52,340 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 8, commitMemory -> 33, usedMemory ->35
2021-08-13 15:58:52,342 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2043931804_0001_m_000007_0 decomp: 2 len: 6 to MEMORY
2021-08-13 15:58:52,342 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local2043931804_0001_m_000007_0
2021-08-13 15:58:52,343 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 9, commitMemory -> 35, usedMemory ->37
2021-08-13 15:58:52,344 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2043931804_0001_m_000004_0 decomp: 2 len: 6 to MEMORY
2021-08-13 15:58:52,344 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local2043931804_0001_m_000004_0
2021-08-13 15:58:52,344 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 10, commitMemory -> 37, usedMemory ->39
2021-08-13 15:58:52,345 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
2021-08-13 15:58:52,346 INFO mapred.LocalJobRunner: 10 / 10 copied.
2021-08-13 15:58:52,346 INFO reduce.MergeManagerImpl: finalMerge called with 10 in-memory map-outputs and 0 on-disk map-outputs
2021-08-13 15:58:52,396 INFO mapred.Merger: Merging 10 sorted segments
2021-08-13 15:58:52,397 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 10 bytes
2021-08-13 15:58:52,398 INFO reduce.MergeManagerImpl: Merged 10 segments, 39 bytes to disk to satisfy reduce memory limit
2021-08-13 15:58:52,398 INFO reduce.MergeManagerImpl: Merging 1 files, 25 bytes from disk
2021-08-13 15:58:52,399 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
2021-08-13 15:58:52,399 INFO mapred.Merger: Merging 1 sorted segments
2021-08-13 15:58:52,399 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 10 bytes
2021-08-13 15:58:52,400 INFO mapred.LocalJobRunner: 10 / 10 copied.
2021-08-13 15:58:52,469 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2021-08-13 15:58:52,471 INFO mapred.Task: Task:attempt_local2043931804_0001_r_000000_0 is done. And is in the process of committing
2021-08-13 15:58:52,471 INFO mapred.LocalJobRunner: 10 / 10 copied.
2021-08-13 15:58:52,471 INFO mapred.Task: Task attempt_local2043931804_0001_r_000000_0 is allowed to commit now
2021-08-13 15:58:52,473 INFO output.FileOutputCommitter: Saved output of task 'attempt_local2043931804_0001_r_000000_0' to file:/home/bernard/test/grep-temp-951326842
2021-08-13 15:58:52,474 INFO mapred.LocalJobRunner: reduce > reduce
2021-08-13 15:58:52,474 INFO mapred.Task: Task 'attempt_local2043931804_0001_r_000000_0' done.
2021-08-13 15:58:52,474 INFO mapred.Task: Final Counters for attempt_local2043931804_0001_r_000000_0: Counters: 24
        File System Counters
                FILE: Number of bytes read=319778
                FILE: Number of bytes written=918498
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Combine input records=0
                Combine output records=0
                Reduce input groups=1
                Reduce shuffle bytes=79
                Reduce input records=1
                Reduce output records=1
                Spilled Records=1
                Shuffled Maps =10
                Failed Shuffles=0
                Merged Map outputs=10
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=578813952
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Output Format Counters 
                Bytes Written=123
2021-08-13 15:58:52,475 INFO mapred.LocalJobRunner: Finishing task: attempt_local2043931804_0001_r_000000_0
2021-08-13 15:58:52,475 INFO mapred.LocalJobRunner: reduce task executor complete.
2021-08-13 15:58:52,828 INFO mapreduce.Job:  map 100% reduce 100%
2021-08-13 15:58:52,829 INFO mapreduce.Job: Job job_local2043931804_0001 completed successfully
2021-08-13 15:58:52,875 INFO mapreduce.Job: Counters: 30
        File System Counters
                FILE: Number of bytes read=3435151
                FILE: Number of bytes written=10100288
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=792
                Map output records=1
                Map output bytes=17
                Map output materialized bytes=79
                Input split bytes=1099
                Combine input records=1
                Combine output records=1
                Reduce input groups=1
                Reduce shuffle bytes=79
                Reduce input records=1
                Reduce output records=1
                Spilled Records=2
                Shuffled Maps =10
                Failed Shuffles=0
                Merged Map outputs=10
                GC time elapsed (ms)=123
                Total committed heap usage (bytes)=5078777856
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=29478
        File Output Format Counters 
                Bytes Written=123
2021-08-13 15:58:52,926 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2021-08-13 15:58:52,947 INFO input.FileInputFormat: Total input files to process : 1
2021-08-13 15:58:52,951 INFO mapreduce.JobSubmitter: number of splits:1
2021-08-13 15:58:52,997 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local980663752_0002
2021-08-13 15:58:52,997 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-08-13 15:58:53,107 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
2021-08-13 15:58:53,108 INFO mapreduce.Job: Running job: job_local980663752_0002
2021-08-13 15:58:53,108 INFO mapred.LocalJobRunner: OutputCommitter set in config null
2021-08-13 15:58:53,109 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:53,109 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:53,109 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2021-08-13 15:58:53,113 INFO mapred.LocalJobRunner: Waiting for map tasks
2021-08-13 15:58:53,113 INFO mapred.LocalJobRunner: Starting task: attempt_local980663752_0002_m_000000_0
2021-08-13 15:58:53,115 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:53,115 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:53,116 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:53,118 INFO mapred.MapTask: Processing split: file:/home/bernard/test/grep-temp-951326842/part-r-00000:0+111
2021-08-13 15:58:53,190 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2021-08-13 15:58:53,190 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2021-08-13 15:58:53,190 INFO mapred.MapTask: soft limit at 83886080
2021-08-13 15:58:53,191 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2021-08-13 15:58:53,191 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2021-08-13 15:58:53,192 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-08-13 15:58:53,299 INFO mapred.LocalJobRunner: 
2021-08-13 15:58:53,299 INFO mapred.MapTask: Starting flush of map output
2021-08-13 15:58:53,299 INFO mapred.MapTask: Spilling map output
2021-08-13 15:58:53,299 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
2021-08-13 15:58:53,299 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
2021-08-13 15:58:53,300 INFO mapred.MapTask: Finished spill 0
2021-08-13 15:58:53,302 INFO mapred.Task: Task:attempt_local980663752_0002_m_000000_0 is done. And is in the process of committing
2021-08-13 15:58:53,303 INFO mapred.LocalJobRunner: map
2021-08-13 15:58:53,303 INFO mapred.Task: Task 'attempt_local980663752_0002_m_000000_0' done.
2021-08-13 15:58:53,303 INFO mapred.Task: Final Counters for attempt_local980663752_0002_m_000000_0: Counters: 17
        File System Counters
                FILE: Number of bytes read=601065
                FILE: Number of bytes written=1831004
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=1
                Map output records=1
                Map output bytes=17
                Map output materialized bytes=25
                Input split bytes=121
                Combine input records=0
                Spilled Records=1
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=684195840
        File Input Format Counters 
                Bytes Read=123
2021-08-13 15:58:53,304 INFO mapred.LocalJobRunner: Finishing task: attempt_local980663752_0002_m_000000_0
2021-08-13 15:58:53,304 INFO mapred.LocalJobRunner: map task executor complete.
2021-08-13 15:58:53,305 INFO mapred.LocalJobRunner: Waiting for reduce tasks
2021-08-13 15:58:53,305 INFO mapred.LocalJobRunner: Starting task: attempt_local980663752_0002_r_000000_0
2021-08-13 15:58:53,307 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-13 15:58:53,307 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-13 15:58:53,307 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-13 15:58:53,307 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@76c49e80
2021-08-13 15:58:53,307 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2021-08-13 15:58:53,309 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=626471744, maxSingleShuffleLimit=156617936, mergeThreshold=413471360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2021-08-13 15:58:53,310 INFO reduce.EventFetcher: attempt_local980663752_0002_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2021-08-13 15:58:53,311 INFO reduce.LocalFetcher: localfetcher#2 about to shuffle output of map attempt_local980663752_0002_m_000000_0 decomp: 21 len: 25 to MEMORY
2021-08-13 15:58:53,312 INFO reduce.InMemoryMapOutput: Read 21 bytes from map-output for attempt_local980663752_0002_m_000000_0
2021-08-13 15:58:53,312 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 21, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->21
2021-08-13 15:58:53,314 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
2021-08-13 15:58:53,314 INFO mapred.LocalJobRunner: 1 / 1 copied.
2021-08-13 15:58:53,315 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2021-08-13 15:58:53,316 INFO mapred.Merger: Merging 1 sorted segments
2021-08-13 15:58:53,316 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 11 bytes
2021-08-13 15:58:53,317 INFO reduce.MergeManagerImpl: Merged 1 segments, 21 bytes to disk to satisfy reduce memory limit
2021-08-13 15:58:53,318 INFO reduce.MergeManagerImpl: Merging 1 files, 25 bytes from disk
2021-08-13 15:58:53,318 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
2021-08-13 15:58:53,318 INFO mapred.Merger: Merging 1 sorted segments
2021-08-13 15:58:53,318 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 11 bytes
2021-08-13 15:58:53,318 INFO mapred.LocalJobRunner: 1 / 1 copied.
2021-08-13 15:58:53,321 INFO mapred.Task: Task:attempt_local980663752_0002_r_000000_0 is done. And is in the process of committing
2021-08-13 15:58:53,322 INFO mapred.LocalJobRunner: 1 / 1 copied.
2021-08-13 15:58:53,322 INFO mapred.Task: Task attempt_local980663752_0002_r_000000_0 is allowed to commit now
2021-08-13 15:58:53,323 INFO output.FileOutputCommitter: Saved output of task 'attempt_local980663752_0002_r_000000_0' to file:/home/bernard/test/output
2021-08-13 15:58:53,326 INFO mapred.LocalJobRunner: reduce > reduce
2021-08-13 15:58:53,326 INFO mapred.Task: Task 'attempt_local980663752_0002_r_000000_0' done.
2021-08-13 15:58:53,326 INFO mapred.Task: Final Counters for attempt_local980663752_0002_r_000000_0: Counters: 24
        File System Counters
                FILE: Number of bytes read=601147
                FILE: Number of bytes written=1831052
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Combine input records=0
                Combine output records=0
                Reduce input groups=1
                Reduce shuffle bytes=25
                Reduce input records=1
                Reduce output records=1
                Spilled Records=1
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=684195840
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Output Format Counters 
                Bytes Written=23
2021-08-13 15:58:53,326 INFO mapred.LocalJobRunner: Finishing task: attempt_local980663752_0002_r_000000_0
2021-08-13 15:58:53,327 INFO mapred.LocalJobRunner: reduce task executor complete.
2021-08-13 15:58:54,109 INFO mapreduce.Job: Job job_local980663752_0002 running in uber mode : false
2021-08-13 15:58:54,109 INFO mapreduce.Job:  map 100% reduce 100%
2021-08-13 15:58:54,110 INFO mapreduce.Job: Job job_local980663752_0002 completed successfully
2021-08-13 15:58:54,113 INFO mapreduce.Job: Counters: 30
        File System Counters
                FILE: Number of bytes read=1202212
                FILE: Number of bytes written=3662056
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=1
                Map output records=1
                Map output bytes=17
                Map output materialized bytes=25
                Input split bytes=121
                Combine input records=0
                Combine output records=0
                Reduce input groups=1
                Reduce shuffle bytes=25
                Reduce input records=1
                Reduce output records=1
                Spilled Records=2
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=1368391680
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=123
        File Output Format Counters 
                Bytes Written=23
bernard@aqua:~/test$ cat ./output/*  
1       dfsadmin
bernard@aqua:~/test$ ls ./output/
part-r-00000  _SUCCESS
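One practical note before re-running the example: Hadoop refuses to write into an output directory that already exists, so a second run against ./output fails immediately. Clearing it first avoids the error:

```shell
# A repeat run fails with "Output directory ... already exists"
# unless the previous result is removed first.
rm -rf ./output
```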

Pseudo-distributed configuration

summary

Hadoop can also run on a single node in pseudo-distributed mode: each Hadoop daemon runs as a separate Java process, the node acts as both the NameNode and the DataNode, and jobs read their input from HDFS rather than the local filesystem.

Hadoop's configuration files live in /opt/software/hadoop-3.3.1/etc/hadoop/. Pseudo-distributed mode requires changes to two of them: core-site.xml and hdfs-site.xml. The files are XML; each setting is declared as a property with a name and a value.
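Because every setting is just such a name/value pair, a value can be read back with ordinary text tools. A throwaway sketch (the heredoc stands in for etc/hadoop/core-site.xml, and the sed one-liner is this sketch's own, not a Hadoop tool):

```shell
# Write a minimal core-site.xml-style file, then pull out the <value>
# on the line after a given <name>.
cat > /tmp/core-site-demo.xml <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
EOF
sed -n '/<name>fs.defaultFS<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p}' /tmp/core-site-demo.xml
# prints: hdfs://localhost:9000
```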

core-site.xml

<configuration>
</configuration>

Replace it with

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/software/hadoop-3.3.1/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
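Note that fs.defaultFS is an ordinary URI: scheme hdfs, host localhost, port 9000, which is the address HDFS clients will connect to. Splitting it with shell parameter expansion makes the parts explicit:

```shell
# Decompose the fs.defaultFS URI used above into host and port.
uri="hdfs://localhost:9000"
hostport="${uri#hdfs://}"           # strip the scheme
echo "host=${hostport%%:*} port=${hostport##*:}"
# prints: host=localhost port=9000
```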

hdfs-site.xml

<configuration>
</configuration>

Replace it with

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/software/hadoop-3.3.1/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/software/hadoop-3.3.1/tmp/dfs/data</value>
    </property>
</configuration>

Notes on the Hadoop configuration files

Hadoop's run mode is determined by its configuration files (they are read each time Hadoop runs), so to switch from pseudo-distributed back to standalone mode, remove the properties that were added to core-site.xml.
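Since switching back means restoring the empty <configuration> block, keeping a copy of the pseudo-distributed version makes the switch reversible. A sketch in a scratch directory (the real file lives in /opt/software/hadoop-3.3.1/etc/hadoop; the .pseudo backup name is this sketch's own convention):

```shell
# Demonstrate the back-up-then-empty pattern on a scratch copy.
mkdir -p /tmp/hadoop-conf-demo
cd /tmp/hadoop-conf-demo
printf '%s\n' '<configuration>' \
  '    <property><name>fs.defaultFS</name><value>hdfs://localhost:9000</value></property>' \
  '</configuration>' > core-site.xml
cp core-site.xml core-site.xml.pseudo       # keep the pseudo-distributed version
printf '<configuration>\n</configuration>\n' > core-site.xml
grep -c '<property>' core-site.xml.pseudo   # backup still holds the property
# prints: 1
```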

Also, although pseudo-distributed mode can run with only fs.defaultFS and dfs.replication configured (that is all the official tutorial sets), if hadoop.tmp.dir is left unset the temporary directory defaults to /tmp/hadoop-${user.name}, which the system may wipe on reboot, forcing you to re-run the NameNode format. We therefore set it explicitly, and also specify dfs.namenode.name.dir and dfs.datanode.data.dir to avoid errors in the steps that follow.

# Format the NameNode (only needed before the first start)
bernard@aqua:/opt/software/hadoop-3.3.1$ hdfs namenode -format
# Start the NameNode and DataNode daemons
bernard@aqua:/opt/software/hadoop-3.3.1$ ./sbin/start-dfs.sh

detail

bernard@aqua:/opt/software/hadoop-3.3.1$ sudo useradd -m hadoop -s /bin/bash
[sudo] password for bernard: 
bernard@aqua:/opt/software/hadoop-3.3.1$ su hadoop
Password: 
bernard@aqua:/opt/software/hadoop-3.3.1$ su hadoop passwd
Password: 
su: Authentication failure
bernard@aqua:/opt/software/hadoop-3.3.1$ su root
Password: 
root@aqua:/opt/software/hadoop-3.3.1# passwd hadoop
New password: 
Retype new password: 
Sorry, passwords do not match.
passwd: Authentication token manipulation error
passwd: password unchanged
root@aqua:/opt/software/hadoop-3.3.1# su hadoop
hadoop@aqua:/opt/software/hadoop-3.3.1$
bernard@aqua:~/test$ cd /opt/software/hadoop-3.3.1/
bernard@aqua:/opt/software/hadoop-3.3.1$ ls ./etc/hadoop/
capacity-scheduler.xml            kms-log4j.properties
configuration.xsl                 kms-site.xml
container-executor.cfg            log4j.properties
core-site.xml                     mapred-env.cmd
hadoop-env.cmd                    mapred-env.sh
hadoop-env.sh                     mapred-queues.xml.template
hadoop-metrics2.properties        mapred-site.xml
hadoop-policy.xml                 shellprofile.d
hadoop-user-functions.sh.example  ssl-client.xml.example
hdfs-rbf-site.xml                 ssl-server.xml.example
hdfs-site.xml                     user_ec_policies.xml.template
httpfs-env.sh                     workers
httpfs-log4j.properties           yarn-env.cmd
httpfs-site.xml                   yarn-env.sh
kms-acls.xml                      yarnservice-log4j.properties
kms-env.sh                        yarn-site.xml
bernard@aqua:/opt/software/hadoop-3.3.1$ sudo vi ./etc/hadoop/core-site.xml
bernard@aqua:/opt/software/hadoop-3.3.1$ sudo vi ./etc/hadoop/hdfs-site.xml
bernard@aqua:/opt/software/hadoop-3.3.1$ hdfs namenode -format
2021-08-13 16:42:41,885 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = aqua/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.3.1
STARTUP_MSG:   classpath = /opt/software/hadoop-3.3.1/etc/hadoop:... (several kilobytes of jar paths trimmed)
otobuf_3_7-1.1.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jettison-1.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/curator-client-4.2.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/dnsjava-2.1.7.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/netty-all-4.1.61.Final.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jackson-databind-2.10.5.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/commons-lang3-3.7.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jetty-servlet-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jetty-util-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/commons-compress-1.19.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/woodst
ox-core-5.3.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/zookeeper-3.5.6.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/commons-net-3.6.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/accessors-smart-2.4.2.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jetty-server-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/commons-io-2.8.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jackson-annotations-2.10.5.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jackson-core-2.10.5.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/curator-framework-4.2.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/commons-text-1.4.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jersey-json
-1.19.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/hadoop-auth-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/snappy-java-1.1.8.2.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/hadoop-hdfs-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/hadoop-hdfs-client-3.3.1-tests.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/hadoop-hdfs-native-client-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/hadoop-hdfs-rbf-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/hadoop-hdfs-native-client-3.3.1-tests.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/hadoop-hdfs-nfs-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/hadoop-hdfs-client-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/hadoop-hdfs-rbf-3.3.1-tests.jar:/opt/software/hadoop-3.3.1/share/hadoop/hdfs/hadoop-hdfs-3.3.1-tests.jar:/opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.1-tests.jar:/opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-
mapreduce-client-core-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/snakeyaml-1.26.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/websocket-server-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/fst-2.50.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/websocket-api-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.10.5.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/java-util-1.9.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/bcpkix-jdk15on-1.60.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/asm-tree-9.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/asm-analysis-9.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/jline-3.9.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/jetty-client-9.4.40.v20210413.jar:/opt/software/
hadoop-3.3.1/share/hadoop/yarn/lib/jetty-plus-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/jersey-client-1.19.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/jetty-jndi-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/websocket-client-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/objenesis-2.6.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/guice-4.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/websocket-servlet-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/websocket-common-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.10.5.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/jna-5.2.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/bcprov-jdk15on-1.60.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/jetty-annotations-9.4.40.v20210413.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/asm-commons-9.0.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/json-io-2.5.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/lib/jackson-jaxrs-base-2.10.5.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn
-services-core-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-client-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-api-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-services-api-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-common-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-common-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-registry-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-tests-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.3.1.jar:/opt/software/hadoop-3.3.1/share/hadoop/yarn/hadoop-yarn-server-router-3.3.1.jar
STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r a3b9c37a397ad4188041dd80621bdeefc46885f2; compiled by 'ubuntu' on 2021-06-15T05:13Z
STARTUP_MSG:   java = 1.8.0_292
************************************************************/
2021-08-13 16:42:41,912 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2021-08-13 16:42:42,056 INFO namenode.NameNode: createNameNode [-format]
2021-08-13 16:42:43,314 INFO namenode.NameNode: Formatting using clusterid: CID-5c2c2d28-c9db-4c7e-9e47-f57432c23f98
2021-08-13 16:42:43,439 INFO namenode.FSEditLog: Edit logging is async:true
2021-08-13 16:42:43,628 INFO namenode.FSNamesystem: KeyProvider: null
2021-08-13 16:42:43,631 INFO namenode.FSNamesystem: fsLock is fair: true
2021-08-13 16:42:43,632 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2021-08-13 16:42:43,640 INFO namenode.FSNamesystem: fsOwner                = bernard (auth:SIMPLE)
2021-08-13 16:42:43,640 INFO namenode.FSNamesystem: supergroup             = supergroup
2021-08-13 16:42:43,640 INFO namenode.FSNamesystem: isPermissionEnabled    = true
2021-08-13 16:42:43,640 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2021-08-13 16:42:43,640 INFO namenode.FSNamesystem: HA Enabled: false
2021-08-13 16:42:43,777 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2021-08-13 16:42:43,832 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2021-08-13 16:42:43,832 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2021-08-13 16:42:43,839 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2021-08-13 16:42:43,839 INFO blockmanagement.BlockManager: The block deletion will start around 2021 八月 13 16:42:43
2021-08-13 16:42:43,842 INFO util.GSet: Computing capacity for map BlocksMap
2021-08-13 16:42:43,842 INFO util.GSet: VM type       = 64-bit
2021-08-13 16:42:43,845 INFO util.GSet: 2.0% max memory 853.5 MB = 17.1 MB
2021-08-13 16:42:43,845 INFO util.GSet: capacity      = 2^21 = 2097152 entries
2021-08-13 16:42:43,859 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2021-08-13 16:42:43,859 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2021-08-13 16:42:43,874 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2021-08-13 16:42:43,874 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2021-08-13 16:42:43,874 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2021-08-13 16:42:43,875 INFO blockmanagement.BlockManager: defaultReplication         = 1
2021-08-13 16:42:43,875 INFO blockmanagement.BlockManager: maxReplication             = 512
2021-08-13 16:42:43,876 INFO blockmanagement.BlockManager: minReplication             = 1
2021-08-13 16:42:43,876 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2021-08-13 16:42:43,876 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2021-08-13 16:42:43,876 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2021-08-13 16:42:43,876 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2021-08-13 16:42:43,971 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2021-08-13 16:42:43,971 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2021-08-13 16:42:43,971 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2021-08-13 16:42:43,971 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2021-08-13 16:42:43,995 INFO util.GSet: Computing capacity for map INodeMap
2021-08-13 16:42:43,995 INFO util.GSet: VM type       = 64-bit
2021-08-13 16:42:43,996 INFO util.GSet: 1.0% max memory 853.5 MB = 8.5 MB
2021-08-13 16:42:43,996 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2021-08-13 16:42:43,996 INFO namenode.FSDirectory: ACLs enabled? true
2021-08-13 16:42:43,996 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2021-08-13 16:42:43,997 INFO namenode.FSDirectory: XAttrs enabled? true
2021-08-13 16:42:43,997 INFO namenode.NameNode: Caching file names occurring more than 10 times
2021-08-13 16:42:44,005 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2021-08-13 16:42:44,007 INFO snapshot.SnapshotManager: SkipList is disabled
2021-08-13 16:42:44,013 INFO util.GSet: Computing capacity for map cachedBlocks
2021-08-13 16:42:44,014 INFO util.GSet: VM type       = 64-bit
2021-08-13 16:42:44,014 INFO util.GSet: 0.25% max memory 853.5 MB = 2.1 MB
2021-08-13 16:42:44,014 INFO util.GSet: capacity      = 2^18 = 262144 entries
2021-08-13 16:42:44,046 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2021-08-13 16:42:44,047 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2021-08-13 16:42:44,047 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2021-08-13 16:42:44,053 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2021-08-13 16:42:44,053 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2021-08-13 16:42:44,076 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2021-08-13 16:42:44,076 INFO util.GSet: VM type       = 64-bit
2021-08-13 16:42:44,076 INFO util.GSet: 0.029999999329447746% max memory 853.5 MB = 262.2 KB
2021-08-13 16:42:44,076 INFO util.GSet: capacity      = 2^15 = 32768 entries
2021-08-13 16:42:44,231 INFO namenode.FSImage: Allocated new BlockPoolId: BP-771336332-127.0.1.1-1628844164183
2021-08-13 16:42:44,232 WARN namenode.NameNode: Encountered exception during format
java.io.IOException: Cannot create directory /usr/local/hadoop/tmp/dfs/name/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:447)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:189)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1276)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1724)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1832)
2021-08-13 16:42:44,354 INFO namenode.FSNamesystem: Stopping services started for active state
2021-08-13 16:42:44,355 INFO namenode.FSNamesystem: Stopping services started for standby state
2021-08-13 16:42:44,355 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /usr/local/hadoop/tmp/dfs/name/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:447)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:189)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1276)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1724)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1832)
2021-08-13 16:42:44,358 INFO util.ExitUtil: Exiting with status 1: java.io.IOException: Cannot create directory /usr/local/hadoop/tmp/dfs/name/current
2021-08-13 16:42:44,361 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at aqua/127.0.1.1
************************************************************/
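The format above failed with `Cannot create directory /usr/local/hadoop/tmp/dfs/name/current`: the NameNode writes its metadata under `hadoop.tmp.dir`, and the current user has no write permission there. A minimal fix sketch, assuming `core-site.xml` sets `hadoop.tmp.dir` to `file:/usr/local/hadoop/tmp` (as the log path suggests):

```shell
# Create the directory that hadoop.tmp.dir points at and hand its
# ownership to the user that runs Hadoop (here the login user):
sudo mkdir -p /usr/local/hadoop/tmp
sudo chown -R "$USER":"$USER" /usr/local/hadoop

# Then re-run the format; it should now create tmp/dfs/name/current:
# hdfs namenode -format
```

Alternatively, point `hadoop.tmp.dir` at a directory the user already owns (e.g. somewhere under the home directory) and re-format.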
bernard@aqua:/opt/software/hadoop-3.3.1$ ./sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
localhost: bernard@localhost: Permission denied (publickey,password).
Starting datanodes
localhost: bernard@localhost: Permission denied (publickey,password).
Starting secondary namenodes [aqua]
aqua: Warning: Permanently added 'aqua' (ECDSA) to the list of known hosts.
aqua: bernard@aqua: Permission denied (publickey,password).
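`start-dfs.sh` fails with `Permission denied (publickey,password)` because the start scripts log in to each node (here `localhost` and `aqua`) over SSH, and passwordless SSH has not been set up. A common fix sketch:

```shell
# Generate an RSA key pair with an empty passphrase (skip if
# ~/.ssh/id_rsa already exists), then authorize it for local logins:
mkdir -p ~/.ssh
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

# Verify that "ssh localhost" now logs in without a password,
# then re-run ./sbin/start-dfs.sh
```

After `ssh localhost` succeeds without a password prompt, `start-dfs.sh` should start the NameNode, DataNode, and SecondaryNameNode; confirm with `jps`.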

reference

http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
http://dblab.xmu.edu.cn/blog/2441-2/#more-2441

Published: 2021-08-14 14:08:30  Updated: 2021-08-14 14:09:53
 