The Hadoop MiniCluster is a lightweight, single-node Hadoop cluster that is primarily used for testing and development purposes. It provides a way to simulate a distributed Hadoop environment on a single machine, allowing developers to experiment with and test their Hadoop applications without the need for a full-scale, multi-node cluster.
Quote from the official Hadoop documentation:
Using the CLI MiniCluster, users can simply start and stop a single-node Hadoop cluster with a single command, and without the need to set any environment variables or manage configuration files. The CLI MiniCluster starts both a YARN/MapReduce & HDFS clusters.
In this notebook we download the Hadoop core and guide you through the steps required to launch the MiniCluster.
# @title
from IPython.core.display import HTML
HTML("""
<div style="background-color:rgb(16, 163, 127,.2);border:2px solid rgb(16, 163, 127,.3);padding:3px;">
<svg fill="none" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 320 320" style="width:32px;height:32px;">
<g fill="currentColor">
<path d="m297.06 130.97c7.26-21.79 4.76-45.66-6.85-65.48-17.46-30.4-52.56-46.04-86.84-38.68-15.25-17.18-37.16-26.95-60.13-26.81-35.04-.08-66.13 22.48-76.91 55.82-22.51 4.61-41.94 18.7-53.31 38.67-17.59 30.32-13.58 68.54 9.92 94.54-7.26 21.79-4.76 45.66 6.85 65.48 17.46 30.4 52.56 46.04 86.84 38.68 15.24 17.18 37.16 26.95 60.13 26.8 35.06.09 66.16-22.49 76.94-55.86 22.51-4.61 41.94-18.7 53.31-38.67 17.57-30.32 13.55-68.51-9.94-94.51zm-120.28 168.11c-14.03.02-27.62-4.89-38.39-13.88.49-.26 1.34-.73 1.89-1.07l63.72-36.8c3.26-1.85 5.26-5.32 5.24-9.07v-89.83l26.93 15.55c.29.14.48.42.52.74v74.39c-.04 33.08-26.83 59.9-59.91 59.97zm-128.84-55.03c-7.03-12.14-9.56-26.37-7.15-40.18.47.28 1.3.79 1.89 1.13l63.72 36.8c3.23 1.89 7.23 1.89 10.47 0l77.79-44.92v31.1c.02.32-.13.63-.38.83l-64.41 37.19c-28.69 16.52-65.33 6.7-81.92-21.95zm-16.77-139.09c7-12.16 18.05-21.46 31.21-26.29 0 .55-.03 1.52-.03 2.2v73.61c-.02 3.74 1.98 7.21 5.23 9.06l77.79 44.91-26.93 15.55c-.27.18-.61.21-.91.08l-64.42-37.22c-28.63-16.58-38.45-53.21-21.95-81.89zm221.26 51.49-77.79-44.92 26.93-15.54c.27-.18.61-.21.91-.08l64.42 37.19c28.68 16.57 38.51 53.26 21.94 81.94-7.01 12.14-18.05 21.44-31.2 26.28v-75.81c.03-3.74-1.96-7.2-5.2-9.06zm26.8-40.34c-.47-.29-1.3-.79-1.89-1.13l-63.72-36.8c-3.23-1.89-7.23-1.89-10.47 0l-77.79 44.92v-31.1c-.02-.32.13-.63.38-.83l64.41-37.16c28.69-16.55 65.37-6.7 81.91 22 6.99 12.12 9.52 26.31 7.15 40.1zm-168.51 55.43-26.94-15.55c-.29-.14-.48-.42-.52-.74v-74.39c.02-33.12 26.89-59.96 60.01-59.94 14.01 0 27.57 4.92 38.34 13.88-.49.26-1.33.73-1.89 1.07l-63.72 36.8c-3.26 1.85-5.26 5.31-5.24 9.06l-.04 89.79zm14.63-31.54 34.65-20.01 34.65 20v40.01l-34.65 20-34.65-20z"></path>
</svg>
<b>Note:</b>
While the MiniCluster is useful for many development and testing scenarios, it's important to note that it does not fully replicate the complexities and challenges of a true multi-node Hadoop cluster. For production-scale testing or performance evaluations, a larger, more representative cluster setup is recommended.
</div>
""")
This tutorial goes into detail to help you really get the hang of things, explaining every step of the Big Data processing setup. It takes a bit of time, so hang in there and be patient!
import urllib.request
import os
import shutil
import tarfile
import logging
import subprocess
import time
import sys
#############
# CONSTANTS #
#############
# URL for downloading Hadoop (archive site https://archive.apache.org/dist/)
HADOOP_URL = "https://archive.apache.org/dist/hadoop/core/hadoop-3.4.0/hadoop-3.4.0.tar.gz"
# logging level (should be one of: DEBUG, INFO, WARNING, ERROR, CRITICAL)
LOGGING_LEVEL = "INFO" #@param ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
# setup logging
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)
logging_level = getattr(logging, LOGGING_LEVEL.upper(), 10)
logging.basicConfig(level=logging_level,
                    format='%(asctime)s - %(levelname)s: %(message)s',
                    datefmt='%d-%b-%y %I:%M:%S %p')
logger = logging.getLogger('my_logger')
JAVA_PATH = '/usr/lib/jvm/java-11-openjdk-amd64'
# true if running on Google Colab
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
    from google.colab import output
    # setup logging
    for handler in logging.root.handlers[:]:
        logging.root.removeHandler(handler)
    logging_level = getattr(logging, LOGGING_LEVEL.upper(), 10)
    logging.basicConfig(level=logging_level,
                        format='%(asctime)s - %(levelname)s: %(message)s',
                        datefmt='%d-%b-%y %I:%M:%S %p')
    logger = logging.getLogger('my_logger')
# set variable JAVA_HOME (install Java if necessary)
def is_java_installed():
    os.environ['JAVA_HOME'] = os.path.realpath(shutil.which("java")).split('/bin')[0]
    return os.environ['JAVA_HOME']
def install_java():
    # Uncomment and modify the desired version
    # java_version= 'openjdk-11-jre-headless'
    # java_version= 'default-jre'
    # java_version= 'openjdk-17-jre-headless'
    # java_version= 'openjdk-18-jre-headless'
    java_version= 'openjdk-19-jre-headless'
    print(f"Java not found. Installing {java_version} ... (this might take a while)")
    try:
        cmd = f"apt install -y {java_version}"
        subprocess_output = subprocess.run(cmd, shell=True, check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
        stdout_result = subprocess_output.stdout
        # Process the results as needed
        logger.info("Done installing Java {}".format(java_version))
        os.environ['JAVA_HOME'] = os.path.realpath(shutil.which("java")).split('/bin')[0]
        logger.info("JAVA_HOME is {}".format(os.environ['JAVA_HOME']))
    except subprocess.CalledProcessError as e:
        # Handle the error if the command returns a non-zero exit code
        logger.warning("Command failed with return code {}".format(e.returncode))
        logger.warning("stdout: {}".format(e.stdout))
# Install Java if not available
if is_java_installed():
    logger.info("Java is already installed: {}".format(os.environ['JAVA_HOME']))
else:
    logger.info("Installing Java")
    install_java()
# download Hadoop
file_name = os.path.basename(HADOOP_URL)
if os.path.isfile(file_name):
    logger.info("{} already exists, not downloading".format(file_name))
else:
    logger.info("Downloading {}".format(file_name))
    urllib.request.urlretrieve(HADOOP_URL, file_name)
# uncompress archive
dir_name = file_name[:-7]
if os.path.exists(dir_name):
    logger.info("{} is already uncompressed".format(file_name))
else:
    logger.info("Uncompressing {}".format(file_name))
    tar = tarfile.open(file_name)
    tar.extractall()
    tar.close()
# environment variables
os.environ['HADOOP_HOME'] = os.path.join(os.getcwd(), dir_name)
logger.info("HADOOP_HOME is {}".format(os.environ['HADOOP_HOME']))
os.environ['PATH'] = ':'.join([os.path.join(os.environ['HADOOP_HOME'], 'bin'), os.environ['PATH']])
logger.info("PATH is {}".format(os.environ['PATH']))
04-Aug-24 07:09:23 PM - INFO: Java is already installed: /usr/lib/jvm/java-11-openjdk-amd64
04-Aug-24 07:09:23 PM - INFO: Downloading hadoop-3.4.0.tar.gz
04-Aug-24 07:10:00 PM - INFO: Uncompressing hadoop-3.4.0.tar.gz
04-Aug-24 07:10:26 PM - INFO: HADOOP_HOME is /content/hadoop-3.4.0
04-Aug-24 07:10:26 PM - INFO: PATH is /content/hadoop-3.4.0/bin:/opt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin
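As an optional sanity check (a sketch, not part of the original setup), you can confirm that the hadoop binary is now reachable through the modified PATH:
# optional sanity check: the first line of `hadoop version` should mention 3.4.0
result = subprocess.run(["hadoop", "version"], capture_output=True, text=True)
print(result.stdout.splitlines()[0])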
mapred
The following steps are not needed, but they might be useful to get familiar with the mapred command.
mapred minicluster is the command that we are going to use to start the MiniCluster, once a few variables and libraries are taken care of.
!mapred -h
Usage: mapred [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
 or    mapred [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
  where CLASSNAME is a user-provided Java class

  OPTIONS is none or any of:
--config dir        Hadoop config directory
--debug             turn on shell script debug mode
--help              usage information

  SUBCOMMAND is one of:

    Admin Commands:
frameworkuploader   mapreduce framework upload
hsadmin             job history server admin interface

    Client Commands:
classpath           prints the class path needed for running mapreduce subcommands
envvars             display computed Hadoop environment variables
job                 manipulate MapReduce jobs
minicluster         CLI MiniCluster
pipes               run a Pipes job
queue               get information regarding JobQueues
sampler             sampler
version             print the version

    Daemon Commands:
historyserver       run job history servers as a standalone daemon

SUBCOMMAND may print help when invoked w/o parameters or with -h.
!mapred envvars
JAVA_HOME='/usr/lib/jvm/java-11-openjdk-amd64'
HADOOP_MAPRED_HOME='/content/hadoop-3.4.0'
MAPRED_DIR='share/hadoop/mapreduce'
MAPRED_LIB_JARS_DIR='share/hadoop/mapreduce/lib'
HADOOP_CONF_DIR='/content/hadoop-3.4.0/etc/hadoop'
HADOOP_TOOLS_HOME='/content/hadoop-3.4.0'
HADOOP_TOOLS_DIR='share/hadoop/tools'
HADOOP_TOOLS_LIB_JARS_DIR='share/hadoop/tools/lib'
HADOOP_TOOLS_LIB_JARS_DIR
This variable needs to point to the folder containing the Hadoop libraries. As you can see in the output of mapred envvars, by default it is set incorrectly to the relative path share/hadoop/tools/lib.
os.environ['HADOOP_TOOLS_LIB_JARS_DIR'] = os.path.join(os.environ['HADOOP_HOME'], 'share/hadoop/tools/lib/') #IMPORTANT
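A quick, optional check (a sketch) that the folder actually exists and contains jar files:
# optional check: HADOOP_TOOLS_LIB_JARS_DIR should exist and contain many Hadoop jars
jars = [f for f in os.listdir(os.environ['HADOOP_TOOLS_LIB_JARS_DIR']) if f.endswith('.jar')]
print(len(jars), "jars found, e.g.", sorted(jars)[:3])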
mockito library
To figure out which version of mockito is compatible with the current version of Hadoop, check this page for library dependency analysis: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/dependency-analysis.html
We need Mockito 2.28.2.
After a fresh Hadoop installation, the mockito library is not there yet!
!find hadoop-3.4.0 -name "mockito*"
Download it from the Maven repository.
!wget --no-clobber https://repo1.maven.org/maven2/org/mockito/mockito-core/2.28.2/mockito-core-2.28.2.jar
--2024-08-04 19:10:27-- https://repo1.maven.org/maven2/org/mockito/mockito-core/2.28.2/mockito-core-2.28.2.jar Resolving repo1.maven.org (repo1.maven.org)... 199.232.192.209, 199.232.196.209, 2a04:4e42:4c::209, ... Connecting to repo1.maven.org (repo1.maven.org)|199.232.192.209|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 591179 (577K) [application/java-archive] Saving to: ‘mockito-core-2.28.2.jar’ mockito-core-2.28.2 100%[===================>] 577.32K --.-KB/s in 0.05s 2024-08-04 19:10:27 (10.3 MB/s) - ‘mockito-core-2.28.2.jar’ saved [591179/591179]
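If wget is not available in your environment, a hedged alternative is to reuse urllib, mirroring the Hadoop download above:
# alternative to wget (a sketch): download the mockito jar with urllib, skipping it if the file is already there
mockito_url = "https://repo1.maven.org/maven2/org/mockito/mockito-core/2.28.2/mockito-core-2.28.2.jar"
mockito_jar = os.path.basename(mockito_url)
if not os.path.isfile(mockito_jar):
    urllib.request.urlretrieve(mockito_url, mockito_jar)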
Install the library in a location where it can be found.
shutil.copy('mockito-core-2.28.2.jar', os.path.join(os.environ['HADOOP_HOME'],'share/hadoop/mapreduce/'))
'/content/hadoop-3.4.0/share/hadoop/mapreduce/mockito-core-2.28.2.jar'
os.listdir(os.path.join(os.environ['HADOOP_HOME'],'share/hadoop/mapreduce/'))
['hadoop-mapreduce-client-core-3.4.0.jar', 'sources', 'hadoop-mapreduce-client-uploader-3.4.0.jar', 'hadoop-mapreduce-client-jobclient-3.4.0.jar', 'hadoop-mapreduce-client-shuffle-3.4.0.jar', 'hadoop-mapreduce-client-common-3.4.0.jar', 'mockito-core-2.28.2.jar', 'hadoop-mapreduce-client-hs-3.4.0.jar', 'hadoop-mapreduce-client-app-3.4.0.jar', 'hadoop-mapreduce-client-jobclient-3.4.0-tests.jar', 'hadoop-mapreduce-examples-3.4.0.jar', 'hadoop-mapreduce-client-nativetask-3.4.0.jar', 'hadoop-mapreduce-client-hs-plugins-3.4.0.jar', 'jdiff']
These folders are needed for the correct functioning of the MiniCluster.
!mkdir -p ./target/test/data/dfs/{name-0-1,name-0-2}
!ls ./target/test/data/dfs/
name-0-1 name-0-2
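If you prefer to stay in Python, here is a sketch of the equivalent of the mkdir -p command above:
# Python equivalent of `mkdir -p ./target/test/data/dfs/{name-0-1,name-0-2}` (a sketch)
for d in ("name-0-1", "name-0-2"):
    os.makedirs(os.path.join("target", "test", "data", "dfs", d), exist_ok=True)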
To see a full list of options run mapred minicluster -help.
!mapred minicluster -help
usage: ...
 -D <property=value>    Options to pass into configuration object
 -datanodes <arg>       How many datanodes to start (default 1)
 -format                Format the DFS (default false)
 -help                  Prints option help.
 -jhsport <arg>         JobHistoryServer port (default 0--we choose)
 -namenode <arg>        URL of the namenode (default is either the DFS cluster or a temporary dir)
 -nnhttpport <arg>      NameNode HTTP port (default 0--we choose)
 -nnport <arg>          NameNode port (default 0--we choose)
 -nodemanagers <arg>    How many nodemanagers to start (default 1)
 -nodfs                 Don't start a mini DFS cluster
 -nomr                  Don't start a mini MR cluster
 -rmport <arg>          ResourceManager port (default 0--we choose)
 -writeConfig <path>    Save configuration to this XML file.
 -writeDetails          Write basic information to this JSON file.
If you are not running this notebook for the first time, or if you have edited the core-site.xml file, you should now empty it to get the default initial configuration.
Note: the file core-site.xml needs to exist and contain the lines
<configuration>
</configuration>
# check if the file is there
!find $HADOOP_HOME -name "core-site.xml"
/content/hadoop-3.4.0/etc/hadoop/core-site.xml
# view the contents of the file
!cat $(find $HADOOP_HOME -name "core-site.xml")
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
</configuration>
with open(os.environ['HADOOP_HOME']+'/etc/hadoop/core-site.xml', 'w') as file:
    file.write("<configuration>\n</configuration>")
!cat $(find $HADOOP_HOME -name "core-site.xml")
<configuration>
</configuration>
mapred minicluster -format
Finally, we are all set up to start the MiniCluster.
Make sure to include the -format option to initialize and format the filesystem. Other than that, we use the defaults for all the other options.
Note that this process runs indefinitely and therefore blocks the notebook. In order to proceed with the rest of the notebook, just interrupt the running cell.
We'll see later how to run the MiniCluster as a subprocess without blocking the execution of the notebook's cells.
Uncomment the next cell to launch the MiniCluster!
#!mapred minicluster -format
If the MiniMRCluster started correctly, you should see a line like this at the bottom:
2024-01-14 13:53:15,112 INFO mapreduce.MiniHadoopClusterManager: Started MiniMRCluster
To continue to work with this notebook, you need to stop the MiniCluster by terminating the execution of the previous cell.
It is convenient to start the MiniCluster as a subprocess in order to prevent it from blocking the execution of other notebook cells.
The MiniCluster is a Java process with multiple listening ports.
lsof to show listening ports
To check ports that have listening services, use lsof. The output should look like this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node 7 root 21u IPv6 19666 0t0 TCP *:8080 (LISTEN)
kernel_manager_ 20 root 3u IPv4 18322 0t0 TCP 172.28.0.12:6000 (LISTEN)
colab-fileshim. 61 root 3u IPv4 19763 0t0 TCP 127.0.0.1:3453 (LISTEN)
jupyter-noteboo 79 root 7u IPv4 19989 0t0 TCP 172.28.0.12:9000 (LISTEN)
python3 428 root 21u IPv4 25506 0t0 TCP 127.0.0.1:38217 (LISTEN)
python3 467 root 3u IPv4 26425 0t0 TCP 127.0.0.1:43729 (LISTEN)
python3 467 root 5u IPv4 26426 0t0 TCP 127.0.0.1:60229 (LISTEN)
!lsof -n -i -P +c0 -sTCP:LISTEN
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node 7 root 21u IPv6 20065 0t0 TCP *:8080 (LISTEN)
kernel_manager_ 25 root 6u IPv4 19487 0t0 TCP 172.28.0.12:6000 (LISTEN)
colab-fileshim. 70 root 3u IPv4 19330 0t0 TCP 127.0.0.1:3453 (LISTEN)
jupyter-noteboo 91 root 7u IPv4 20298 0t0 TCP 172.28.0.12:9000 (LISTEN)
python3 2301 root 21u IPv4 84721 0t0 TCP 127.0.0.1:37185 (LISTEN)
python3 2340 root 3u IPv4 86086 0t0 TCP 127.0.0.1:35717 (LISTEN)
python3 2340 root 5u IPv4 86087 0t0 TCP 127.0.0.1:45663 (LISTEN)
Options used in lsof:
- -i specifies that you want to display only network files, that is, open network connections
- -n and -P tell lsof to show IP addresses (-n) and ports (-P) in numeric form. This makes lsof faster as it saves the time for name lookups.
- +c0 is used to show a longer substring of the name of the UNIX command associated with the process (https://linux.die.net/man/8/lsof)
- -sTCP:LISTEN filters for TCP connections in state LISTEN
See also lsof and listening ports on Stackexchange.
Start the MiniCluster as a subprocess using Python's subprocess library.
The files out.txt and err.txt will contain, respectively, the standard output and the standard error emitted by the mapred minicluster command.
import subprocess
with open('out.txt', "w") as stdout_file, open('err.txt', "w") as stderr_file:
    process = subprocess.Popen(
        ["mapred", "minicluster", "-format"],
        stdout=stdout_file,
        stderr=stderr_file
    )
Wait a little while, because the services might not be available immediately.
if not IN_COLAB:
    time.sleep(30)
else:
    time.sleep(10)
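If you prefer not to guess how long to wait, a hedged alternative is to poll until a known port answers; the sketch below uses the ResourceManager Web UI port 8088, which we query with wget further down:
# a sketch: poll until the ResourceManager Web UI (port 8088) accepts TCP connections
import socket

def wait_for_port(port, host="127.0.0.1", timeout=120):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(1)
    return False

print("ResourceManager Web UI reachable:", wait_for_port(8088))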
Now check for listening ports again (you can also refresh the next cell with ctrl-enter).
!lsof -n -i -P +c0 -sTCP:LISTEN
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node 7 root 21u IPv6 20065 0t0 TCP *:8080 (LISTEN)
kernel_manager_ 25 root 6u IPv4 19487 0t0 TCP 172.28.0.12:6000 (LISTEN)
colab-fileshim. 70 root 3u IPv4 19330 0t0 TCP 127.0.0.1:3453 (LISTEN)
jupyter-noteboo 91 root 7u IPv4 20298 0t0 TCP 172.28.0.12:9000 (LISTEN)
python3 2301 root 21u IPv4 84721 0t0 TCP 127.0.0.1:37185 (LISTEN)
python3 2340 root 3u IPv4 86086 0t0 TCP 127.0.0.1:35717 (LISTEN)
python3 2340 root 5u IPv4 86087 0t0 TCP 127.0.0.1:45663 (LISTEN)
java 2805 root 341u IPv4 98240 0t0 TCP 127.0.0.1:46505 (LISTEN)
java 2805 root 351u IPv4 98317 0t0 TCP 127.0.0.1:44975 (LISTEN)
java 2805 root 361u IPv4 98320 0t0 TCP 127.0.0.1:42669 (LISTEN)
java 2805 root 364u IPv4 100111 0t0 TCP 127.0.0.1:44643 (LISTEN)
java 2805 root 393u IPv4 98335 0t0 TCP 127.0.0.1:34731 (LISTEN)
java 2805 root 394u IPv4 98342 0t0 TCP 127.0.0.1:36923 (LISTEN)
You should have gotten something like this (a total of $18$ listening ports associated with the MiniCluster process):
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node 6 root 21u IPv6 17373 0t0 TCP *:8080 (LISTEN)
kernel_manager_ 20 root 3u IPv4 17180 0t0 TCP 172.28.0.12:6000 (LISTEN)
colab-fileshim. 58 root 3u IPv4 19499 0t0 TCP 127.0.0.1:3453 (LISTEN)
jupyter-noteboo 75 root 7u IPv4 19658 0t0 TCP 172.28.0.12:9000 (LISTEN)
python3 4081 root 21u IPv4 108431 0t0 TCP 127.0.0.1:44519 (LISTEN)
python3 4108 root 3u IPv4 109755 0t0 TCP 127.0.0.1:46699 (LISTEN)
python3 4108 root 5u IPv4 109756 0t0 TCP 127.0.0.1:51813 (LISTEN)
java 17261 root 347u IPv4 390097 0t0 TCP 127.0.0.1:38817 (LISTEN)
java 17261 root 357u IPv4 391184 0t0 TCP 127.0.0.1:41631 (LISTEN)
java 17261 root 367u IPv4 390801 0t0 TCP 127.0.0.1:34651 (LISTEN)
java 17261 root 370u IPv4 390804 0t0 TCP 127.0.0.1:35015 (LISTEN)
java 17261 root 399u IPv4 390814 0t0 TCP 127.0.0.1:46503 (LISTEN)
java 17261 root 400u IPv4 390817 0t0 TCP 127.0.0.1:44665 (LISTEN)
java 17261 root 423u IPv4 401418 0t0 TCP *:8031 (LISTEN)
java 17261 root 440u IPv4 395796 0t0 TCP *:10033 (LISTEN)
java 17261 root 450u IPv4 400174 0t0 TCP *:19888 (LISTEN)
java 17261 root 455u IPv4 396244 0t0 TCP 127.0.0.1:43877 (LISTEN)
java 17261 root 465u IPv4 400367 0t0 TCP *:8088 (LISTEN)
java 17261 root 470u IPv4 400420 0t0 TCP *:8033 (LISTEN)
java 17261 root 490u IPv4 401422 0t0 TCP *:8030 (LISTEN)
java 17261 root 500u IPv4 400426 0t0 TCP 127.0.0.1:37335 (LISTEN)
java 17261 root 510u IPv4 400470 0t0 TCP 127.0.0.1:42337 (LISTEN)
java 17261 root 520u IPv4 401450 0t0 TCP 127.0.0.1:40401 (LISTEN)
java 17261 root 530u IPv4 401454 0t0 TCP *:42359 (LISTEN)
java 17261 root 531u IPv4 401457 0t0 TCP 127.0.0.1:38543 (LISTEN)
The java process is the one responsible for providing the MiniCluster services by listening on several ports.
There are two well-known ports for the Web interfaces (see https://hadoop.apache.org/docs/.../ClusterSetup.html#Web_Interfaces): 8088 for the ResourceManager and 19888 for the MapReduce JobHistory Server.
Let us check the Web interface at port $8088$.
!wget http://localhost:8088
--2024-08-04 19:10:41-- http://localhost:8088/ Resolving localhost (localhost)... 127.0.0.1, ::1 Connecting to localhost (localhost)|127.0.0.1|:8088... failed: Connection refused. Connecting to localhost (localhost)|::1|:8088... failed: Cannot assign requested address. Retrying. --2024-08-04 19:10:42-- (try: 2) http://localhost:8088/ Connecting to localhost (localhost)|127.0.0.1|:8088... failed: Connection refused. Connecting to localhost (localhost)|::1|:8088... failed: Cannot assign requested address. Retrying. --2024-08-04 19:10:44-- (try: 3) http://localhost:8088/ Connecting to localhost (localhost)|127.0.0.1|:8088... failed: Connection refused. Connecting to localhost (localhost)|::1|:8088... failed: Cannot assign requested address. Retrying. --2024-08-04 19:10:47-- (try: 4) http://localhost:8088/ Connecting to localhost (localhost)|127.0.0.1|:8088... connected. HTTP request sent, awaiting response... 302 Found Location: http://localhost:8088/cluster [following] --2024-08-04 19:10:47-- http://localhost:8088/cluster Reusing existing connection to localhost:8088. HTTP request sent, awaiting response... 200 OK Length: 14065 (14K) [text/html] Saving to: ‘index.html’ index.html 100%[===================>] 13.74K --.-KB/s in 0.001s 2024-08-04 19:10:48 (24.8 MB/s) - ‘index.html’ saved [14065/14065]
We can serve the ResourceManager UI in the browser through Google Colab.
if IN_COLAB:
    # serve the Web UI on Colab
    print("Click on the link below to open the Resource Manager Web UI 🚀")
    output.serve_kernel_port_as_window(8088, path='/node')
Click on the link below to open the Resource Manager Web UI 🚀
The port $19888$ is redirected to the same page as port $8088$, so it won't be very useful. I'm not sure if this is due to a missing configuration parameter or if it's a bug.
!wget http://localhost:19888
--2024-08-04 19:10:48-- http://localhost:19888/ Resolving localhost (localhost)... 127.0.0.1, ::1 Connecting to localhost (localhost)|127.0.0.1|:19888... connected. HTTP request sent, awaiting response... 302 Found Location: http://localhost:19888/cluster [following] --2024-08-04 19:10:48-- http://localhost:19888/cluster Reusing existing connection to localhost:19888. HTTP request sent, awaiting response... 200 OK Length: 14065 (14K) [text/html] Saving to: ‘index.html.1’ index.html.1 100%[===================>] 13.74K --.-KB/s in 0s 2024-08-04 19:10:48 (104 MB/s) - ‘index.html.1’ saved [14065/14065]
if IN_COLAB:
    # serve the Web UI on Colab
    print("Click on the link below to open the MapReduce JobHistory Server Web UI 🚀")
    output.serve_kernel_port_as_window(19888, path='/node')
Click on the link below to open the MapReduce JobHistory Server Web UI 🚀
In the free tier of Google Colab this functionality might not be available (see https://research.google.com/colaboratory/faq.html#limitations-and-restrictions). As an alternative, you can use ngrok after signing up for a free account.
Check the NGROK box below if you want to use ngrok.
# you should set this to True
NGROK = False #@param {type:"boolean"}
We are going to use the Python ngrok client pyngrok (see the Colab example).
if NGROK:
    !pip install pyngrok
    from pyngrok import ngrok, conf
    import getpass
    print("Enter your authtoken, which can be copied from https://dashboard.ngrok.com/get-started/your-authtoken")
    conf.get_default().auth_token = getpass.getpass()
Collecting pyngrok Downloading pyngrok-7.2.0-py3-none-any.whl.metadata (7.4 kB) Requirement already satisfied: PyYAML>=5.1 in /usr/local/lib/python3.10/dist-packages (from pyngrok) (6.0.1) Downloading pyngrok-7.2.0-py3-none-any.whl (22 kB) Installing collected packages: pyngrok Successfully installed pyngrok-7.2.0 Enter your authtoken, which can be copied from https://dashboard.ngrok.com/get-started/your-authtoken ··········
After entering the ngrok authorization token, you can open a connection.
if NGROK:
    # Open a ngrok tunnel to the HTTP server
    public_url = ngrok.connect(19888).public_url
04-Aug-24 07:12:03 PM - INFO: Opening tunnel named: http-19888-9ba3fee7-3b4b-40cb-ad20-2d842c826538
04-Aug-24 07:12:04 PM - INFO: Overriding default auth token 04-Aug-24 07:12:04 PM - INFO: t=2024-08-04T19:12:04+0000 lvl=info msg="no configuration paths supplied" 04-Aug-24 07:12:04 PM - INFO: t=2024-08-04T19:12:04+0000 lvl=info msg="using configuration at default config path" path=/root/.config/ngrok/ngrok.yml 04-Aug-24 07:12:04 PM - INFO: t=2024-08-04T19:12:04+0000 lvl=info msg="open config file" path=/root/.config/ngrok/ngrok.yml err=nil 04-Aug-24 07:12:04 PM - INFO: t=2024-08-04T19:12:04+0000 lvl=info msg="starting web service" obj=web addr=127.0.0.1:4040 allow_hosts=[] 04-Aug-24 07:12:04 PM - INFO: t=2024-08-04T19:12:04+0000 lvl=info msg="client session established" obj=tunnels.session 04-Aug-24 07:12:04 PM - INFO: t=2024-08-04T19:12:04+0000 lvl=info msg="tunnel session started" obj=tunnels.session 04-Aug-24 07:12:04 PM - INFO: t=2024-08-04T19:12:04+0000 lvl=info msg=start pg=/api/tunnels id=1cc987793bc5e340 04-Aug-24 07:12:05 PM - INFO: t=2024-08-04T19:12:04+0000 lvl=info msg=end pg=/api/tunnels id=1cc987793bc5e340 status=200 dur=447.85µs 04-Aug-24 07:12:05 PM - INFO: t=2024-08-04T19:12:05+0000 lvl=info msg=start pg=/api/tunnels id=7f6c4a0097fbee1f 04-Aug-24 07:12:05 PM - INFO: t=2024-08-04T19:12:05+0000 lvl=info msg=end pg=/api/tunnels id=7f6c4a0097fbee1f status=200 dur=160.467µs 04-Aug-24 07:12:05 PM - INFO: t=2024-08-04T19:12:05+0000 lvl=info msg=start pg=/api/tunnels id=756a04f173da4cf6 04-Aug-24 07:12:05 PM - INFO: t=2024-08-04T19:12:05+0000 lvl=info msg="started tunnel" obj=tunnels name=http-19888-9ba3fee7-3b4b-40cb-ad20-2d842c826538 addr=http://localhost:19888 url=https://3a1d-34-74-204-234.ngrok-free.app 04-Aug-24 07:12:05 PM - INFO: t=2024-08-04T19:12:05+0000 lvl=info msg=end pg=/api/tunnels id=756a04f173da4cf6 status=201 dur=48.830762ms
if NGROK:
    print(f'Click on {public_url} to open the MapReduce JobHistory Server Web UI')
Click on https://3a1d-34-74-204-234.ngrok-free.app to open the MapReduce JobHistory Server Web UI
You can safely ignore the warning, since we are not disclosing any confidential information, and proceed by clicking on the "Visit site" button.
To stop the MiniCluster subprocess use process.kill() (remember: process is the variable name for the MiniCluster subprocess).
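A slightly gentler variant (a sketch, using only the standard subprocess API) first asks the JVM to terminate and only falls back to kill():
# a sketch: try a graceful shutdown first, then force it
process.terminate()              # send SIGTERM to the MiniCluster JVM
try:
    process.wait(timeout=30)     # give it some time to shut down
except subprocess.TimeoutExpired:
    process.kill()               # force termination if it is still alive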
process.kill()
The Java process should now be gone.
!lsof -n -i -P +c0 -sTCP:LISTEN
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node 7 root 21u IPv6 20065 0t0 TCP *:8080 (LISTEN)
kernel_manager_ 25 root 6u IPv4 19487 0t0 TCP 172.28.0.12:6000 (LISTEN)
colab-fileshim. 70 root 3u IPv4 19330 0t0 TCP 127.0.0.1:3453 (LISTEN)
jupyter-noteboo 91 root 7u IPv4 20298 0t0 TCP 172.28.0.12:9000 (LISTEN)
python3 2301 root 21u IPv4 84721 0t0 TCP 127.0.0.1:37185 (LISTEN)
python3 2340 root 3u IPv4 86086 0t0 TCP 127.0.0.1:35717 (LISTEN)
python3 2340 root 5u IPv4 86087 0t0 TCP 127.0.0.1:45663 (LISTEN)
ngrok 3698 root 6u IPv4 116605 0t0 TCP 127.0.0.1:4040 (LISTEN)
In case there are still some java processes lingering around, kill them with
!pkill -f java
Verify that the Java processes are gone.
This time we will also set the ports for various services:
mapred minicluster -format -jhsport 8900 -nnhttpport 8901 -nnport 8902 -rmport 8903
Ports:
| port number | description |
| --- | --- |
| 8900 | JobHistoryServer port |
| 8901 | NameNode HTTP port |
| 8902 | NameNode port |
| 8903 | ResourceManager port |
Note: we chose these port numbers (8900, 8901, 8902, 8903) arbitrarily; you can pick other numbers as long as they do not conflict with ports that are already in use.
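If you want to verify beforehand that the candidate ports are free, here is a small sketch using Python's socket module:
# a sketch: a port is considered free here if nothing is currently accepting connections on it
import socket

def port_is_free(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) != 0   # non-zero means the connection attempt failed

print({p: port_is_free(p) for p in (8900, 8901, 8902, 8903)})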
import subprocess
with open('out.txt', "w") as stdout_file, open('err.txt', "w") as stderr_file:
    process = subprocess.Popen(
        ["mapred", "minicluster", "-format", "-jhsport", "8900", "-nnhttpport", "8901", "-nnport", "8902", "-rmport", "8903"],
        stdout=stdout_file,
        stderr=stderr_file
    )
if not IN_COLAB:
    time.sleep(30)
else:
    time.sleep(10)
04-Aug-24 07:12:10 PM - WARNING: t=2024-08-04T19:12:10+0000 lvl=warn msg="failed to open private leg" id=3f1ee8ca6795 privaddr=localhost:19888 err="dial tcp 127.0.0.1:19888: connect: connection refused" 04-Aug-24 07:12:10 PM - WARNING: t=2024-08-04T19:12:10+0000 lvl=warn msg="failed to open private leg" id=166fe8e39b26 privaddr=localhost:19888 err="dial tcp 127.0.0.1:19888: connect: connection refused"
List the Java ports. These should be all the ports associated with the MiniCluster java process.
Note: grep "^COMMAND\|java" means "keep only the lines that begin with the string COMMAND or that contain the string java". This preserves the header line.
!lsof -n -i -P +c0 -sTCP:LISTEN | grep "^COMMAND\|java"
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 3727 root 341u IPv4 119325 0t0 TCP 127.0.0.1:8901 (LISTEN)
You should now see
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 30246 root 347u IPv4 671659 0t0 TCP 127.0.0.1:8901 (LISTEN)
java 30246 root 357u IPv4 672029 0t0 TCP 127.0.0.1:8902 (LISTEN)
java 30246 root 367u IPv4 674008 0t0 TCP 127.0.0.1:44721 (LISTEN)
java 30246 root 370u IPv4 672082 0t0 TCP 127.0.0.1:36789 (LISTEN)
java 30246 root 399u IPv4 674082 0t0 TCP 127.0.0.1:37975 (LISTEN)
java 30246 root 400u IPv4 674085 0t0 TCP 127.0.0.1:37675 (LISTEN)
java 30246 root 423u IPv4 681319 0t0 TCP *:8031 (LISTEN)
java 30246 root 440u IPv4 680955 0t0 TCP *:10033 (LISTEN)
java 30246 root 450u IPv4 681014 0t0 TCP *:19888 (LISTEN)
java 30246 root 455u IPv4 681987 0t0 TCP 127.0.0.1:8900 (LISTEN)
java 30246 root 465u IPv4 682035 0t0 TCP *:8088 (LISTEN)
java 30246 root 470u IPv4 681313 0t0 TCP *:8033 (LISTEN)
java 30246 root 490u IPv4 681325 0t0 TCP *:8030 (LISTEN)
java 30246 root 500u IPv4 682046 0t0 TCP 127.0.0.1:8903 (LISTEN)
java 30246 root 510u IPv4 681370 0t0 TCP 127.0.0.1:34521 (LISTEN)
java 30246 root 520u IPv4 682057 0t0 TCP 127.0.0.1:36981 (LISTEN)
java 30246 root 530u IPv4 681373 0t0 TCP *:46657 (LISTEN)
java 30246 root 531u IPv4 682061 0t0 TCP 127.0.0.1:39897 (LISTEN)
Our ports $8900$, $8901$, $8902$, and $8903$ are included in the list.
The log messages are in the file err.txt. The last lines of the log should look like this:
2023-12-27 21:36:08,820 INFO server.MiniYARNCluster: All Node Managers connected in MiniYARNCluster
2023-12-27 21:36:08,820 INFO v2.MiniMRYarnCluster: MiniMRYARN ResourceManager address: localhost:8903
2023-12-27 21:36:08,821 INFO v2.MiniMRYarnCluster: MiniMRYARN ResourceManager web address: 0.0.0.0:8088
2023-12-27 21:36:08,821 INFO v2.MiniMRYarnCluster: MiniMRYARN HistoryServer address: localhost:8900
2023-12-27 21:36:08,822 INFO v2.MiniMRYarnCluster: MiniMRYARN HistoryServer web address: 26769af38ddc:19888
2023-12-27 21:36:08,823 INFO mapreduce.MiniHadoopClusterManager: Started MiniMRCluster
!tail err.txt
2024-08-04 19:12:16,167 INFO util.GSet: capacity = 2^17 = 131072 entries 2024-08-04 19:12:16,183 INFO common.Storage: Lock on /content/target/test/data/dfs/name-0-1/in_use.lock acquired by nodename 3727@446a375b7fe4 2024-08-04 19:12:16,191 INFO common.Storage: Lock on /content/target/test/data/dfs/name-0-2/in_use.lock acquired by nodename 3727@446a375b7fe4 2024-08-04 19:12:16,195 INFO namenode.FileJournalManager: Recovering unfinalized segments in /content/target/test/data/dfs/name-0-1/current 2024-08-04 19:12:16,195 INFO namenode.FileJournalManager: Recovering unfinalized segments in /content/target/test/data/dfs/name-0-2/current 2024-08-04 19:12:16,195 INFO namenode.FSImage: No edit log streams selected. 2024-08-04 19:12:16,195 INFO namenode.FSImage: Planning to load image: FSImageFile(file=/content/target/test/data/dfs/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000) 2024-08-04 19:12:16,348 INFO namenode.FSImageFormatPBINode: Loading 1 INodes. 2024-08-04 19:12:16,360 INFO namenode.FSImageFormatPBINode: Successfully loaded 1 inodes 2024-08-04 19:12:16,383 INFO namenode.FSImageFormatPBINode: Completed update blocks map and name cache, total waiting duration 0ms.
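To confirm programmatically that the cluster finished starting, you could search err.txt for the "Started MiniMRCluster" line (a sketch):
# a sketch: look for the final "Started MiniMRCluster" message in the MiniCluster log
with open('err.txt') as f:
    started = any('Started MiniMRCluster' in line for line in f)
print("MiniCluster started:", started)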
Check the NameNode's HTTP port.
!wget http://localhost:8901
--2024-08-04 19:12:16-- http://localhost:8901/ Resolving localhost (localhost)... 127.0.0.1, ::1 Connecting to localhost (localhost)|127.0.0.1|:8901... connected. HTTP request sent, awaiting response... 302 Found Location: http://localhost:8901/index.html [following] --2024-08-04 19:12:16-- http://localhost:8901/index.html Reusing existing connection to localhost:8901. HTTP request sent, awaiting response... 200 OK Length: 1079 (1.1K) [text/html] Saving to: ‘index.html.2’ index.html.2 100%[===================>] 1.05K --.-KB/s in 0s 2024-08-04 19:12:17 (117 MB/s) - ‘index.html.2’ saved [1079/1079]
Serve the NameNode UI in the browser through Google Colab (the path should be set to /index.html, as in the output of wget, otherwise the URL won't work).
if IN_COLAB and not NGROK:
    # serve the Web UI on Colab
    print("Click on the link below to open the NameNode Web UI 🚀")
    output.serve_kernel_port_as_window(8901, path='/index.html')
else:
    if NGROK:
        # disconnect previous tunnels (note: you can have max 3 tunnels open!)
        # see: https://pyngrok.readthedocs.io/en/latest/index.html#get-active-tunnels
        tunnels = ngrok.get_tunnels()
        for t in tunnels:
            ngrok.disconnect(t.public_url)
        # Open a ngrok tunnel to the HTTP server on port 8901
        public_url = ngrok.connect(8901).public_url
        print(f'Click on {public_url} to open the NameNode Web UI 🚀')
04-Aug-24 07:12:17 PM - INFO: t=2024-08-04T19:12:17+0000 lvl=info msg=start pg=/api/tunnels id=618f6ef4fc692b19 04-Aug-24 07:12:17 PM - INFO: t=2024-08-04T19:12:17+0000 lvl=info msg=end pg=/api/tunnels id=618f6ef4fc692b19 status=200 dur=265.638µs 04-Aug-24 07:12:17 PM - INFO: Disconnecting tunnel: https://3a1d-34-74-204-234.ngrok-free.app 04-Aug-24 07:12:17 PM - INFO: t=2024-08-04T19:12:17+0000 lvl=info msg=start pg=/api/tunnels/http-19888-9ba3fee7-3b4b-40cb-ad20-2d842c826538 id=26cf928d2ad02baa 04-Aug-24 07:12:17 PM - INFO: t=2024-08-04T19:12:17+0000 lvl=info msg=end pg=/api/tunnels/http-19888-9ba3fee7-3b4b-40cb-ad20-2d842c826538 id=26cf928d2ad02baa status=204 dur=36.380055ms 04-Aug-24 07:12:17 PM - INFO: t=2024-08-04T19:12:17+0000 lvl=info msg="failed to accept connection: Listener closed" obj=tunnels.session clientid=cf31c5d7baff4c19c11cf349a9ee2705 04-Aug-24 07:12:17 PM - WARNING: t=2024-08-04T19:12:17+0000 lvl=warn msg="Stopping forwarder" name=http-19888-9ba3fee7-3b4b-40cb-ad20-2d842c826538 acceptErr="failed to accept connection: Listener closed" 04-Aug-24 07:12:17 PM - INFO: t=2024-08-04T19:12:17+0000 lvl=info msg="Error handling the forwarder accept error" error="no tunnel found with requested name" 04-Aug-24 07:12:17 PM - INFO: Opening tunnel named: http-8901-dbbf6775-d1b1-4b5c-ad30-fc781622caef 04-Aug-24 07:12:17 PM - INFO: t=2024-08-04T19:12:17+0000 lvl=info msg=start pg=/api/tunnels id=3d43722d46a8ba3a 04-Aug-24 07:12:17 PM - INFO: t=2024-08-04T19:12:17+0000 lvl=info msg="started tunnel" obj=tunnels name=http-8901-dbbf6775-d1b1-4b5c-ad30-fc781622caef addr=http://localhost:8901 url=https://f668-34-74-204-234.ngrok-free.app
Click on https://f668-34-74-204-234.ngrok-free.app to open the NameNode Web UI 🚀
04-Aug-24 07:12:17 PM - INFO: t=2024-08-04T19:12:17+0000 lvl=info msg=end pg=/api/tunnels id=3d43722d46a8ba3a status=201 dur=46.18765ms
By clicking on the above link you should see the NameNode's Web UI in your browser:
Note: in order to use the MiniCluster's Hadoop filesystem you need to specify the full path, prepending hdfs://localhost:8902/, otherwise hdfs will write to the local filesystem.
%%bash
# create a folder my_dir
hdfs dfs -mkdir hdfs://localhost:8902/my_dir
List the contents of my_dir (it should be empty).
!hdfs dfs -ls hdfs://localhost:8902/my_dir
Upload the local folder sample_data to my_dir on HDFS.
!ls -lh sample_data
total 55M
-rwxr-xr-x 1 root root 1.7K Jan 1 2000 anscombe.json
-rw-r--r-- 1 root root 295K Aug 1 13:24 california_housing_test.csv
-rw-r--r-- 1 root root 1.7M Aug 1 13:24 california_housing_train.csv
-rw-r--r-- 1 root root 18M Aug 1 13:24 mnist_test.csv
-rw-r--r-- 1 root root 35M Aug 1 13:24 mnist_train_small.csv
-rwxr-xr-x 1 root root 930 Jan 1 2000 README.md
Check the total size of the local folder sample_data using the command du ("du" stands for "disk usage"; the -h option stands for "human", as it formats file sizes in a "human-readable" fashion, e.g. 55M instead of 55508).
!du -h sample_data
55M sample_data
Upload sample_data to HDFS.
!hdfs dfs -put sample_data hdfs://localhost:8902/my_dir/
Check
!hdfs dfs -ls hdfs://localhost:8902/my_dir
Found 1 items
drwxr-xr-x - root supergroup 0 2024-08-04 19:12 hdfs://localhost:8902/my_dir/sample_data
Check the size of my_dir on HDFS using the HDFS equivalent of du. The first number in the output is the raw size of the data; the second is the total space consumed across all replicas (with the default replication factor of 3, 54.2 M becomes about 162.6 M).
!hdfs dfs -du -h hdfs://localhost:8902/my_dir
54.2 M 162.6 M hdfs://localhost:8902/my_dir/sample_data
Check the contents of the HDFS folder my_dir
!hdfs dfs -ls -R -h hdfs://localhost:8902/my_dir
drwxr-xr-x - root supergroup 0 2024-08-04 19:12 hdfs://localhost:8902/my_dir/sample_data
-rw-r--r-- 3 root supergroup 930 2024-08-04 19:12 hdfs://localhost:8902/my_dir/sample_data/README.md
-rw-r--r-- 3 root supergroup 1.7 K 2024-08-04 19:12 hdfs://localhost:8902/my_dir/sample_data/anscombe.json
-rw-r--r-- 3 root supergroup 294.1 K 2024-08-04 19:12 hdfs://localhost:8902/my_dir/sample_data/california_housing_test.csv
-rw-r--r-- 3 root supergroup 1.6 M 2024-08-04 19:12 hdfs://localhost:8902/my_dir/sample_data/california_housing_train.csv
-rw-r--r-- 3 root supergroup 17.4 M 2024-08-04 19:12 hdfs://localhost:8902/my_dir/sample_data/mnist_test.csv
-rw-r--r-- 3 root supergroup 34.8 M 2024-08-04 19:12 hdfs://localhost:8902/my_dir/sample_data/mnist_train_small.csv
You should now see in the Web interface that the "DFS used" has increased (you might need to refresh the NameNode UI Web page):
Remove the folder my_dir
!hdfs dfs -rm -r hdfs://localhost:8902/my_dir
Deleted hdfs://localhost:8902/my_dir
Now the DFS used should be back to ~$4$MB.
By default, hdfs will use the local filesystem, so you need to prepend hdfs://... if you want to use the HDFS filesystem.
If you do not want to use the prefix hdfs://localhost:8902/ in the filenames, you could set the property fs.defaultFS in core-site.xml or else use the option -fs like this:
hdfs dfs -fs hdfs://localhost:8902/
!hdfs dfs -fs hdfs://localhost:8902/ -ls /
Found 2 items
drwxrwxrwx - root supergroup 0 2024-08-04 19:12 /content
drwxr-xr-x - root supergroup 0 2024-08-04 19:12 /user
This is the same as
!hdfs dfs -ls hdfs://localhost:8902/
Found 2 items
drwxrwxrwx - root supergroup 0 2024-08-04 19:12 hdfs://localhost:8902/content
drwxr-xr-x - root supergroup 0 2024-08-04 19:12 hdfs://localhost:8902/user
And also the same as
!hdfs dfs -D fs.defaultFS=hdfs://localhost:8902/ -ls /
Found 2 items
drwxrwxrwx - root supergroup 0 2024-08-04 19:12 /content
drwxr-xr-x - root supergroup 0 2024-08-04 19:12 /user
With the option -D we can set any configuration property on the fly (in this case we set fs.defaultFS to be the HDFS filesystem).
Note: the -D option should come before any other option.
If you configure the property fs.defaultFS in core-site.xml, you can also use hdfs dfs -ls /.
with open(os.environ['HADOOP_HOME']+'/etc/hadoop/core-site.xml', 'w') as file:
    file.write("""<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:8902/</value>
</property>
</configuration>""")
!cat $HADOOP_HOME/etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:8902/</value>
</property>
</configuration>
!hdfs dfs -ls /
Found 2 items
drwxrwxrwx - root supergroup 0 2024-08-04 19:12 /content
drwxr-xr-x - root supergroup 0 2024-08-04 19:12 /user
Let us set the local filesystem as the default (file:/// means the local filesystem file:// and the extra slash / indicates the root folder).
with open(os.environ['HADOOP_HOME']+'/etc/hadoop/core-site.xml', 'w') as file:
    file.write("""
<configuration>
<property>
<name>fs.defaultFS</name>
<value>file:///</value>
</property>
</configuration>""")
Now run !hdfs dfs -ls / as before. This time we are listing the local filesystem and not HDFS.
!hdfs dfs -ls /
Found 29 items -rwxr-xr-x 1 root root 0 2024-08-04 18:59 /.dockerenv -rw-r--r-- 1 root root 17294 2023-11-10 04:55 /NGC-DL-CONTAINER-LICENSE drwxr-xr-x - root root 20480 2024-08-01 13:20 /bin drwxr-xr-x - root root 4096 2022-04-18 10:28 /boot drwxr-xr-x - root root 4096 2024-08-04 19:12 /content -rw-r--r-- 1 root root 4332 2023-11-10 04:56 /cuda-keyring_1.0-1_all.deb drwxr-xr-x - root root 4096 2024-08-01 13:41 /datalab drwxr-xr-x - root root 360 2024-08-04 18:59 /dev drwxr-xr-x - root root 4096 2024-08-04 18:59 /etc drwxr-xr-x - root root 4096 2022-04-18 10:28 /home drwxr-xr-x - root root 4096 2024-08-04 18:59 /kaggle drwxr-xr-x - root root 4096 2024-08-01 13:20 /lib drwxr-xr-x - root root 4096 2024-08-01 13:16 /lib32 drwxr-xr-x - root root 4096 2024-08-01 13:16 /lib64 drwxr-xr-x - root root 4096 2023-10-04 02:08 /libx32 drwxr-xr-x - root root 4096 2023-10-04 02:08 /media drwxr-xr-x - root root 4096 2023-10-04 02:08 /mnt drwxr-xr-x - root root 4096 2024-08-01 13:54 /opt dr-xr-xr-x - root root 0 2024-08-04 18:59 /proc drwxr-xr-x - root root 4096 2024-08-01 13:22 /python-apt drwx------ - root root 4096 2024-08-04 19:00 /root drwxr-xr-x - root root 4096 2024-08-01 13:15 /run drwxr-xr-x - root root 4096 2024-08-04 18:59 /sbin drwxr-xr-x - root root 4096 2023-10-04 02:08 /srv dr-xr-xr-x - root root 0 2024-08-04 18:59 /sys drwxrwxrwt - root root 4096 2024-08-04 19:12 /tmp drwxr-xr-x - root root 4096 2024-08-01 13:41 /tools drwxr-xr-x - root root 4096 2024-08-01 13:42 /usr drwxr-xr-x - root root 4096 2024-08-01 13:41 /var
I advise you to get used to the fact that Hadoop interprets a file path as HDFS (hdfs://) or local (file://) depending on the setting of the property fs.defaultFS, since this is often a source of confusion.
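One way to avoid surprises is to always spell out the scheme explicitly; a quick illustration (assuming the MiniCluster from above is still listening on port 8902):
# explicit schemes are unambiguous, whatever fs.defaultFS happens to be
!hdfs dfs -ls file:///tmp
!hdfs dfs -ls hdfs://localhost:8902/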
hdfs dfsadmin
The command hdfs dfsadmin allows you to run administration tasks on the Hadoop filesystem.
!hdfs dfsadmin -h
h: Unknown command Usage: hdfs dfsadmin Note: Administrative commands can only be run as the HDFS superuser. [-report [-live] [-dead] [-decommissioning] [-enteringmaintenance] [-inmaintenance] [-slownodes]] [-safemode <enter | leave | get | wait | forceExit>] [-saveNamespace [-beforeShutdown]] [-rollEdits] [-restoreFailedStorage true|false|check] [-refreshNodes] [-setQuota <quota> <dirname>...<dirname>] [-clrQuota <dirname>...<dirname>] [-setSpaceQuota <quota> [-storageType <storagetype>] <dirname>...<dirname>] [-clrSpaceQuota [-storageType <storagetype>] <dirname>...<dirname>] [-finalizeUpgrade] [-rollingUpgrade [<query|prepare|finalize>]] [-upgrade <query | finalize>] [-refreshServiceAcl] [-refreshUserToGroupsMappings] [-refreshSuperUserGroupsConfiguration] [-refreshCallQueue] [-refresh <host:ipc_port> <key> [arg1..argn] [-reconfig <namenode|datanode> <host:ipc_port|livenodes> <start|status|properties>] [-printTopology] [-refreshNamenodes datanode_host:ipc_port] [-getVolumeReport datanode_host:ipc_port] [-deleteBlockPool datanode_host:ipc_port blockpoolId [force]] [-setBalancerBandwidth <bandwidth in bytes per second>] [-getBalancerBandwidth <datanode_host:ipc_port>] [-fetchImage <local directory>] [-allowSnapshot <snapshotDir>] [-disallowSnapshot <snapshotDir>] [-provisionSnapshotTrash <snapshotDir> [-all]] [-shutdownDatanode <datanode_host:ipc_port> [upgrade]] [-evictWriters <datanode_host:ipc_port>] [-getDatanodeInfo <datanode_host:ipc_port>] [-metasave filename] [-triggerBlockReport [-incremental] <datanode_host:ipc_port> [-namenode <namenode_host:ipc_port>]] [-listOpenFiles [-blockingDecommission] [-path <path>]] [-help [cmd]] Generic options supported are: -conf <configuration file> specify an application configuration file -D <property=value> define a value for a given property -fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations. -jt <local|resourcemanager:port> specify a ResourceManager -files <file1,...> specify a comma-separated list of files to be copied to the map reduce cluster -libjars <jar1,...> specify a comma-separated list of jar files to be included in the classpath -archives <archive1,...> specify a comma-separated list of archives to be unarchived on the compute machines The general command line syntax is: command [genericOptions] [commandOptions]
The command hdfs dfsadmin -report shows the current status of the Hadoop filesystem. In order to run it on our MiniCluster HDFS we need to pass the option
-fs hdfs://localhost:8902/
Alternatively, we can configure the default filesystem (the URI of the namenode) in core-site.xml (see the discussion in The default filesystem).
!hdfs dfsadmin -fs hdfs://localhost:8902/ -report
Configured Capacity: 231316381696 (215.43 GB) Present Capacity: 159504314437 (148.55 GB) DFS Remaining: 159501991936 (148.55 GB) DFS Used: 2322501 (2.21 MB) DFS Used%: 0.00% Replicated Blocks: Under replicated blocks: 0 Blocks with corrupt replicas: 0 Missing blocks: 0 Missing blocks (with replication factor 1): 0 Low redundancy blocks with highest priority to recover: 0 Pending deletion blocks: 0 Erasure Coded Block Groups: Low redundancy block groups: 0 Block groups with corrupt internal blocks: 0 Missing block groups: 0 Low redundancy blocks with highest priority to recover: 0 Pending deletion blocks: 0 ------------------------------------------------- Live datanodes (1): Name: 127.0.0.1:39513 (localhost) Hostname: 127.0.0.1 Decommission Status : Normal Configured Capacity: 231316381696 (215.43 GB) DFS Used: 2322501 (2.21 MB) Non DFS Used: 71778512827 (66.85 GB) DFS Remaining: 159501991936 (148.55 GB) DFS Used%: 0.00% DFS Remaining%: 68.95% Configured Cache Capacity: 0 (0 B) Cache Used: 0 (0 B) Cache Remaining: 0 (0 B) Cache Used%: 100.00% Cache Remaining%: 0.00% Xceivers: 0 Last contact: Sun Aug 04 19:13:22 UTC 2024 Last Block Report: Sun Aug 04 19:12:23 UTC 2024 Num of Blocks: 3
The information displayed by hdfs dfsadmin corresponds to what is presented in the NameNode Web UI.
pi example
Find the MapReduce examples that come with the Hadoop distribution.
!find . -name "*examples*.jar"
./hadoop-3.4.0/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-3.4.0-test-sources.jar
./hadoop-3.4.0/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-3.4.0-sources.jar
./hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar
Check if the cluster is still running; if not, you will need to restart it (from the cell Configure the MiniCluster's ports).
!lsof -n -i -P +c0 -sTCP:LISTEN -ac java
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 3727 root 341u IPv4 119325 0t0 TCP 127.0.0.1:8901 (LISTEN)
java 3727 root 346u IPv4 120023 0t0 TCP 127.0.0.1:8902 (LISTEN)
java 3727 root 361u IPv4 120629 0t0 TCP 127.0.0.1:39513 (LISTEN)
java 3727 root 364u IPv4 120632 0t0 TCP 127.0.0.1:39281 (LISTEN)
java 3727 root 393u IPv4 122005 0t0 TCP 127.0.0.1:39089 (LISTEN)
java 3727 root 394u IPv4 122019 0t0 TCP 127.0.0.1:39631 (LISTEN)
java 3727 root 416u IPv4 125902 0t0 TCP *:8031 (LISTEN)
java 3727 root 434u IPv4 125459 0t0 TCP *:10033 (LISTEN)
java 3727 root 444u IPv4 130724 0t0 TCP *:19888 (LISTEN)
java 3727 root 445u IPv4 130725 0t0 TCP *:8088 (LISTEN)
java 3727 root 454u IPv4 130963 0t0 TCP 127.0.0.1:8900 (LISTEN)
java 3727 root 464u IPv4 131073 0t0 TCP *:8033 (LISTEN)
java 3727 root 484u IPv4 132396 0t0 TCP *:8030 (LISTEN)
java 3727 root 494u IPv4 132413 0t0 TCP 127.0.0.1:8903 (LISTEN)
java 3727 root 504u IPv4 132829 0t0 TCP 127.0.0.1:43121 (LISTEN)
java 3727 root 514u IPv4 131545 0t0 TCP 127.0.0.1:41277 (LISTEN)
java 3727 root 524u IPv4 131549 0t0 TCP *:40533 (LISTEN)
java 3727 root 525u IPv4 132834 0t0 TCP 127.0.0.1:40903 (LISTEN)
Here's ChatGPT 3.5's poem inspired by lsof -n -i -P +c0 -sTCP:LISTEN:
# @title
from IPython.core.display import HTML
HTML("""
<div style="background-color:rgb(16, 163, 127,.2);border:2px solid rgb(16, 163, 127,.3);padding:3px;">
<svg fill="none" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 320 320" style="width:32px;height:32px;">
<g fill="currentColor">
<path d="m297.06 130.97c7.26-21.79 4.76-45.66-6.85-65.48-17.46-30.4-52.56-46.04-86.84-38.68-15.25-17.18-37.16-26.95-60.13-26.81-35.04-.08-66.13 22.48-76.91 55.82-22.51 4.61-41.94 18.7-53.31 38.67-17.59 30.32-13.58 68.54 9.92 94.54-7.26 21.79-4.76 45.66 6.85 65.48 17.46 30.4 52.56 46.04 86.84 38.68 15.24 17.18 37.16 26.95 60.13 26.8 35.06.09 66.16-22.49 76.94-55.86 22.51-4.61 41.94-18.7 53.31-38.67 17.57-30.32 13.55-68.51-9.94-94.51zm-120.28 168.11c-14.03.02-27.62-4.89-38.39-13.88.49-.26 1.34-.73 1.89-1.07l63.72-36.8c3.26-1.85 5.26-5.32 5.24-9.07v-89.83l26.93 15.55c.29.14.48.42.52.74v74.39c-.04 33.08-26.83 59.9-59.91 59.97zm-128.84-55.03c-7.03-12.14-9.56-26.37-7.15-40.18.47.28 1.3.79 1.89 1.13l63.72 36.8c3.23 1.89 7.23 1.89 10.47 0l77.79-44.92v31.1c.02.32-.13.63-.38.83l-64.41 37.19c-28.69 16.52-65.33 6.7-81.92-21.95zm-16.77-139.09c7-12.16 18.05-21.46 31.21-26.29 0 .55-.03 1.52-.03 2.2v73.61c-.02 3.74 1.98 7.21 5.23 9.06l77.79 44.91-26.93 15.55c-.27.18-.61.21-.91.08l-64.42-37.22c-28.63-16.58-38.45-53.21-21.95-81.89zm221.26 51.49-77.79-44.92 26.93-15.54c.27-.18.61-.21.91-.08l64.42 37.19c28.68 16.57 38.51 53.26 21.94 81.94-7.01 12.14-18.05 21.44-31.2 26.28v-75.81c.03-3.74-1.96-7.2-5.2-9.06zm26.8-40.34c-.47-.29-1.3-.79-1.89-1.13l-63.72-36.8c-3.23-1.89-7.23-1.89-10.47 0l-77.79 44.92v-31.1c-.02-.32.13-.63.38-.83l64.41-37.16c28.69-16.55 65.37-6.7 81.91 22 6.99 12.12 9.52 26.31 7.15 40.1zm-168.51 55.43-26.94-15.55c-.29-.14-.48-.42-.52-.74v-74.39c.02-33.12 26.89-59.96 60.01-59.94 14.01 0 27.57 4.92 38.34 13.88-.49.26-1.33.73-1.89 1.07l-63.72 36.8c-3.26 1.85-5.26 5.31-5.24 9.06l-.04 89.79zm14.63-31.54 34.65-20.01 34.65 20v40.01l-34.65 20-34.65-20z"></path>
</svg>
<p>
In the realm of <span style="color: #00f;">networks</span>, where processes twine,<br>
A <code>command</code> unfolds, a <em>symphony</em> of lines.<br>
"<strong>Lsof</strong>," it whispers, with a mystic hum,<br>
A dance of flags, each one has its own drum.<br>
<p>
"<code>-n -i -P</code>," the conductor commands,<br>
Navigate swiftly, across distant lands.<br>
"<code>+c0</code>" echoes softly, a chorus of glee,<br>
Embrace all processes, as far as eyes can see.<br>
<p>
"<code>-sTCP:LISTEN</code>," a stanza profound,<br>
Seeking the echoes of ports, a network's sound.<br>
Processes in repose, in a state so keen,<br>
A tapestry of <span style="font-style: italic;">LISTEN</span>, a poetic scene.<br>
</div>
""")
In the realm of networks, where processes twine,
A command unfolds, a symphony of lines.
"Lsof," it whispers, with a mystic hum,
A dance of flags, each one has its own drum.

"-n -i -P," the conductor commands,
Navigate swiftly, across distant lands.
"+c0" echoes softly, a chorus of glee,
Embrace all processes, as far as eyes can see.

"-sTCP:LISTEN," a stanza profound,
Seeking the echoes of ports, a network's sound.
Processes in repose, in a state so keen,
A tapestry of LISTEN, a poetic scene.
Apart from this poetic digression, I consider lsof an exceptionally valuable command.
Use the following command to get the list of available examples in the jar file.
!yarn jar ./hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar
An example program must be given as the first argument.
Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.
Let us run the pi example (here we call it without arguments in order to get a usage message) through yarn.
!yarn jar ./hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar pi
Usage: org.apache.hadoop.examples.QuasiMonteCarlo <nMaps> <nSamples> Generic options supported are: -conf <configuration file> specify an application configuration file -D <property=value> define a value for a given property -fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations. -jt <local|resourcemanager:port> specify a ResourceManager -files <file1,...> specify a comma-separated list of files to be copied to the map reduce cluster -libjars <jar1,...> specify a comma-separated list of jar files to be included in the classpath -archives <archive1,...> specify a comma-separated list of archives to be unarchived on the compute machines The general command line syntax is: command [genericOptions] [commandOptions]
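Before looking at the options, it may help to recall the idea behind the estimator: points are thrown into the unit square, and π is estimated from the fraction that falls inside the inscribed quarter circle. Here is a pure-Python sketch of that idea (illustration only; the Hadoop example distributes the sampling over mappers and uses a quasi-random sequence rather than the plain pseudo-random sampling shown here):
# illustration only: estimate pi from the fraction of random points inside the unit quarter circle
import random

def estimate_pi(n_samples, seed=0):
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))   # should be close to 3.14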
The command takes [genericOptions] and [commandOptions].
The command options are:
- nMaps, the number of mappers
- nSamples, the number of iterations per mapper
!yarn jar ./hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar pi \
5 1000
Number of Maps = 5 Samples per Map = 1000 Wrote input for Map #0 Wrote input for Map #1 Wrote input for Map #2 Wrote input for Map #3 Wrote input for Map #4 Starting Job 2024-08-04 19:13:31,625 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties 2024-08-04 19:13:31,799 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s). 2024-08-04 19:13:31,799 INFO impl.MetricsSystemImpl: JobTracker metrics system started 2024-08-04 19:13:31,979 INFO input.FileInputFormat: Total input files to process : 5 2024-08-04 19:13:32,000 INFO mapreduce.JobSubmitter: number of splits:5 2024-08-04 19:13:32,402 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local2011001580_0001 2024-08-04 19:13:32,402 INFO mapreduce.JobSubmitter: Executing with tokens: [] 2024-08-04 19:13:32,687 INFO mapreduce.Job: The url to track the job: http://localhost:8080/ 2024-08-04 19:13:32,688 INFO mapreduce.Job: Running job: job_local2011001580_0001 2024-08-04 19:13:32,699 INFO mapred.LocalJobRunner: OutputCommitter set in config null 2024-08-04 19:13:32,715 INFO output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory 2024-08-04 19:13:32,717 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2 2024-08-04 19:13:32,717 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2024-08-04 19:13:32,719 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 2024-08-04 19:13:32,814 INFO mapred.LocalJobRunner: Waiting for map tasks 2024-08-04 19:13:32,815 INFO mapred.LocalJobRunner: Starting task: attempt_local2011001580_0001_m_000000_0 2024-08-04 19:13:32,870 INFO output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory 2024-08-04 19:13:32,870 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2 2024-08-04 19:13:32,871 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2024-08-04 19:13:32,918 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2024-08-04 19:13:32,925 INFO mapred.MapTask: Processing split: file:/content/QuasiMonteCarlo_1722798810530_1260429884/in/part0:0+118 2024-08-04 19:13:33,034 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584) 2024-08-04 19:13:33,034 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100 2024-08-04 19:13:33,034 INFO mapred.MapTask: soft limit at 83886080 2024-08-04 19:13:33,034 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600 2024-08-04 19:13:33,034 INFO mapred.MapTask: kvstart = 26214396; length = 6553600 2024-08-04 19:13:33,040 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 2024-08-04 19:13:33,065 INFO mapred.LocalJobRunner: 2024-08-04 19:13:33,066 INFO mapred.MapTask: Starting flush of map output 2024-08-04 19:13:33,066 INFO mapred.MapTask: Spilling map output 2024-08-04 19:13:33,066 INFO mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600 2024-08-04 19:13:33,066 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600 2024-08-04 19:13:33,075 INFO mapred.MapTask: Finished spill 0 2024-08-04 19:13:33,119 INFO mapred.Task: Task:attempt_local2011001580_0001_m_000000_0 is done. 
And is in the process of committing 2024-08-04 19:13:33,124 INFO mapred.LocalJobRunner: Generated 1000 samples. 2024-08-04 19:13:33,124 INFO mapred.Task: Task 'attempt_local2011001580_0001_m_000000_0' done. 2024-08-04 19:13:33,149 INFO mapred.Task: Final Counters for attempt_local2011001580_0001_m_000000_0: Counters: 17 File System Counters FILE: Number of bytes read=282496 FILE: Number of bytes written=997502 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 Map-Reduce Framework Map input records=1 Map output records=2 Map output bytes=18 Map output materialized bytes=28 Input split bytes=128 Combine input records=0 Spilled Records=2 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=23 Total committed heap usage (bytes)=354418688 File Input Format Counters Bytes Read=130 2024-08-04 19:13:33,149 INFO mapred.LocalJobRunner: Finishing task: attempt_local2011001580_0001_m_000000_0 2024-08-04 19:13:33,150 INFO mapred.LocalJobRunner: Starting task: attempt_local2011001580_0001_m_000001_0 2024-08-04 19:13:33,152 INFO output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory 2024-08-04 19:13:33,152 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2 2024-08-04 19:13:33,152 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2024-08-04 19:13:33,153 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2024-08-04 19:13:33,156 INFO mapred.MapTask: Processing split: file:/content/QuasiMonteCarlo_1722798810530_1260429884/in/part3:0+118 2024-08-04 19:13:33,173 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584) 2024-08-04 19:13:33,173 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100 2024-08-04 19:13:33,173 INFO mapred.MapTask: soft limit at 83886080 2024-08-04 19:13:33,173 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600 2024-08-04 19:13:33,173 INFO mapred.MapTask: kvstart = 26214396; length = 6553600 2024-08-04 19:13:33,174 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 2024-08-04 19:13:33,178 INFO mapred.LocalJobRunner: 2024-08-04 19:13:33,179 INFO mapred.MapTask: Starting flush of map output 2024-08-04 19:13:33,179 INFO mapred.MapTask: Spilling map output 2024-08-04 19:13:33,179 INFO mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600 2024-08-04 19:13:33,179 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600 2024-08-04 19:13:33,181 INFO mapred.MapTask: Finished spill 0 2024-08-04 19:13:33,185 INFO mapred.Task: Task:attempt_local2011001580_0001_m_000001_0 is done. And is in the process of committing 2024-08-04 19:13:33,187 INFO mapred.LocalJobRunner: Generated 1000 samples. 2024-08-04 19:13:33,188 INFO mapred.Task: Task 'attempt_local2011001580_0001_m_000001_0' done. 
2024-08-04 19:13:33,188 INFO mapred.Task: Final Counters for attempt_local2011001580_0001_m_000001_0: Counters: 17 File System Counters FILE: Number of bytes read=283289 FILE: Number of bytes written=997562 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 Map-Reduce Framework Map input records=1 Map output records=2 Map output bytes=18 Map output materialized bytes=28 Input split bytes=128 Combine input records=0 Spilled Records=2 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=0 Total committed heap usage (bytes)=354418688 File Input Format Counters Bytes Read=130 2024-08-04 19:13:33,189 INFO mapred.LocalJobRunner: Finishing task: attempt_local2011001580_0001_m_000001_0 2024-08-04 19:13:33,192 INFO mapred.LocalJobRunner: Starting task: attempt_local2011001580_0001_m_000002_0 2024-08-04 19:13:33,193 INFO output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory 2024-08-04 19:13:33,193 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2 2024-08-04 19:13:33,193 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2024-08-04 19:13:33,194 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2024-08-04 19:13:33,196 INFO mapred.MapTask: Processing split: file:/content/QuasiMonteCarlo_1722798810530_1260429884/in/part2:0+118 2024-08-04 19:13:33,239 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584) 2024-08-04 19:13:33,239 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100 2024-08-04 19:13:33,239 INFO mapred.MapTask: soft limit at 83886080 2024-08-04 19:13:33,239 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600 2024-08-04 19:13:33,240 INFO mapred.MapTask: kvstart = 26214396; length = 6553600 2024-08-04 19:13:33,242 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 2024-08-04 19:13:33,246 INFO mapred.LocalJobRunner: 2024-08-04 19:13:33,247 INFO mapred.MapTask: Starting flush of map output 2024-08-04 19:13:33,247 INFO mapred.MapTask: Spilling map output 2024-08-04 19:13:33,247 INFO mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600 2024-08-04 19:13:33,247 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600 2024-08-04 19:13:33,248 INFO mapred.MapTask: Finished spill 0 2024-08-04 19:13:33,252 INFO mapred.Task: Task:attempt_local2011001580_0001_m_000002_0 is done. And is in the process of committing 2024-08-04 19:13:33,256 INFO mapred.LocalJobRunner: Generated 1000 samples. 2024-08-04 19:13:33,257 INFO mapred.Task: Task 'attempt_local2011001580_0001_m_000002_0' done. 
2024-08-04 19:13:33,257 INFO mapred.Task: Final Counters for attempt_local2011001580_0001_m_000002_0: Counters: 17 File System Counters FILE: Number of bytes read=284082 FILE: Number of bytes written=997622 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 Map-Reduce Framework Map input records=1 Map output records=2 Map output bytes=18 Map output materialized bytes=28 Input split bytes=128 Combine input records=0 Spilled Records=2 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=16 Total committed heap usage (bytes)=354418688 File Input Format Counters Bytes Read=130 2024-08-04 19:13:33,258 INFO mapred.LocalJobRunner: Finishing task: attempt_local2011001580_0001_m_000002_0 2024-08-04 19:13:33,258 INFO mapred.LocalJobRunner: Starting task: attempt_local2011001580_0001_m_000003_0 2024-08-04 19:13:33,259 INFO output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory 2024-08-04 19:13:33,259 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2 2024-08-04 19:13:33,259 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2024-08-04 19:13:33,260 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2024-08-04 19:13:33,262 INFO mapred.MapTask: Processing split: file:/content/QuasiMonteCarlo_1722798810530_1260429884/in/part4:0+118 2024-08-04 19:13:33,392 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584) 2024-08-04 19:13:33,392 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100 2024-08-04 19:13:33,392 INFO mapred.MapTask: soft limit at 83886080 2024-08-04 19:13:33,393 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600 2024-08-04 19:13:33,393 INFO mapred.MapTask: kvstart = 26214396; length = 6553600 2024-08-04 19:13:33,394 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 2024-08-04 19:13:33,397 INFO mapred.LocalJobRunner: 2024-08-04 19:13:33,397 INFO mapred.MapTask: Starting flush of map output 2024-08-04 19:13:33,398 INFO mapred.MapTask: Spilling map output 2024-08-04 19:13:33,398 INFO mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600 2024-08-04 19:13:33,398 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600 2024-08-04 19:13:33,399 INFO mapred.MapTask: Finished spill 0 2024-08-04 19:13:33,403 INFO mapred.Task: Task:attempt_local2011001580_0001_m_000003_0 is done. And is in the process of committing 2024-08-04 19:13:33,411 INFO mapred.LocalJobRunner: Generated 1000 samples. 2024-08-04 19:13:33,415 INFO mapred.Task: Task 'attempt_local2011001580_0001_m_000003_0' done. 
2024-08-04 19:13:33,415 INFO mapred.Task: Final Counters for attempt_local2011001580_0001_m_000003_0: Counters: 17 File System Counters FILE: Number of bytes read=284875 FILE: Number of bytes written=997682 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 Map-Reduce Framework Map input records=1 Map output records=2 Map output bytes=18 Map output materialized bytes=28 Input split bytes=128 Combine input records=0 Spilled Records=2 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=0 Total committed heap usage (bytes)=354418688 File Input Format Counters Bytes Read=130 2024-08-04 19:13:33,417 INFO mapred.LocalJobRunner: Finishing task: attempt_local2011001580_0001_m_000003_0 2024-08-04 19:13:33,417 INFO mapred.LocalJobRunner: Starting task: attempt_local2011001580_0001_m_000004_0 2024-08-04 19:13:33,422 INFO output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory 2024-08-04 19:13:33,423 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2 2024-08-04 19:13:33,423 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2024-08-04 19:13:33,423 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2024-08-04 19:13:33,431 INFO mapred.MapTask: Processing split: file:/content/QuasiMonteCarlo_1722798810530_1260429884/in/part1:0+118 2024-08-04 19:13:33,458 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584) 2024-08-04 19:13:33,458 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100 2024-08-04 19:13:33,458 INFO mapred.MapTask: soft limit at 83886080 2024-08-04 19:13:33,458 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600 2024-08-04 19:13:33,458 INFO mapred.MapTask: kvstart = 26214396; length = 6553600 2024-08-04 19:13:33,462 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 2024-08-04 19:13:33,465 INFO mapred.LocalJobRunner: 2024-08-04 19:13:33,474 INFO mapred.MapTask: Starting flush of map output 2024-08-04 19:13:33,474 INFO mapred.MapTask: Spilling map output 2024-08-04 19:13:33,474 INFO mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600 2024-08-04 19:13:33,474 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600 2024-08-04 19:13:33,479 INFO mapred.MapTask: Finished spill 0 2024-08-04 19:13:33,493 INFO mapred.Task: Task:attempt_local2011001580_0001_m_000004_0 is done. And is in the process of committing 2024-08-04 19:13:33,517 INFO mapred.LocalJobRunner: Generated 1000 samples. 2024-08-04 19:13:33,518 INFO mapred.Task: Task 'attempt_local2011001580_0001_m_000004_0' done. 
2024-08-04 19:13:33,519 INFO mapred.Task: Final Counters for attempt_local2011001580_0001_m_000004_0: Counters: 17 File System Counters FILE: Number of bytes read=285156 FILE: Number of bytes written=997742 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 Map-Reduce Framework Map input records=1 Map output records=2 Map output bytes=18 Map output materialized bytes=28 Input split bytes=128 Combine input records=0 Spilled Records=2 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=8 Total committed heap usage (bytes)=473956352 File Input Format Counters Bytes Read=130 2024-08-04 19:13:33,520 INFO mapred.LocalJobRunner: Finishing task: attempt_local2011001580_0001_m_000004_0 2024-08-04 19:13:33,520 INFO mapred.LocalJobRunner: map task executor complete. 2024-08-04 19:13:33,532 INFO mapred.LocalJobRunner: Waiting for reduce tasks 2024-08-04 19:13:33,534 INFO mapred.LocalJobRunner: Starting task: attempt_local2011001580_0001_r_000000_0 2024-08-04 19:13:33,561 INFO output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory 2024-08-04 19:13:33,561 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2 2024-08-04 19:13:33,561 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2024-08-04 19:13:33,562 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2024-08-04 19:13:33,567 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@5046a72e 2024-08-04 19:13:33,570 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized! 2024-08-04 19:13:33,603 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=2382574336, maxSingleShuffleLimit=595643584, mergeThreshold=1572499072, ioSortFactor=10, memToMemMergeOutputsThreshold=10 2024-08-04 19:13:33,613 INFO reduce.EventFetcher: attempt_local2011001580_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events 2024-08-04 19:13:33,688 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2011001580_0001_m_000004_0 decomp: 24 len: 28 to MEMORY 2024-08-04 19:13:33,696 INFO mapreduce.Job: Job job_local2011001580_0001 running in uber mode : false 2024-08-04 19:13:33,696 INFO reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_local2011001580_0001_m_000004_0 2024-08-04 19:13:33,696 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->24 2024-08-04 19:13:33,706 INFO mapreduce.Job: map 100% reduce 0% 2024-08-04 19:13:33,711 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2011001580_0001_m_000001_0 decomp: 24 len: 28 to MEMORY 2024-08-04 19:13:33,716 INFO reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_local2011001580_0001_m_000001_0 2024-08-04 19:13:33,716 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 2, commitMemory -> 24, usedMemory ->48 2024-08-04 19:13:33,722 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2011001580_0001_m_000002_0 decomp: 24 len: 28 to MEMORY 2024-08-04 19:13:33,726 INFO reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_local2011001580_0001_m_000002_0 2024-08-04 19:13:33,726 INFO reduce.MergeManagerImpl: 
closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 3, commitMemory -> 48, usedMemory ->72 2024-08-04 19:13:33,729 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2011001580_0001_m_000000_0 decomp: 24 len: 28 to MEMORY 2024-08-04 19:13:33,731 INFO reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_local2011001580_0001_m_000000_0 2024-08-04 19:13:33,731 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 4, commitMemory -> 72, usedMemory ->96 2024-08-04 19:13:33,737 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local2011001580_0001_m_000003_0 decomp: 24 len: 28 to MEMORY 2024-08-04 19:13:33,740 INFO reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_local2011001580_0001_m_000003_0 2024-08-04 19:13:33,741 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 5, commitMemory -> 96, usedMemory ->120 2024-08-04 19:13:33,742 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning 2024-08-04 19:13:33,743 INFO mapred.LocalJobRunner: 5 / 5 copied. 2024-08-04 19:13:33,744 INFO reduce.MergeManagerImpl: finalMerge called with 5 in-memory map-outputs and 0 on-disk map-outputs 2024-08-04 19:13:33,754 INFO mapred.Merger: Merging 5 sorted segments 2024-08-04 19:13:33,754 INFO mapred.Merger: Down to the last merge-pass, with 5 segments left of total size: 105 bytes 2024-08-04 19:13:33,757 INFO reduce.MergeManagerImpl: Merged 5 segments, 120 bytes to disk to satisfy reduce memory limit 2024-08-04 19:13:33,757 INFO reduce.MergeManagerImpl: Merging 1 files, 116 bytes from disk 2024-08-04 19:13:33,759 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce 2024-08-04 19:13:33,759 INFO mapred.Merger: Merging 1 sorted segments 2024-08-04 19:13:33,759 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 109 bytes 2024-08-04 19:13:33,760 INFO mapred.LocalJobRunner: 5 / 5 copied. 2024-08-04 19:13:33,765 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords 2024-08-04 19:13:33,779 INFO mapred.Task: Task:attempt_local2011001580_0001_r_000000_0 is done. And is in the process of committing 2024-08-04 19:13:33,783 INFO mapred.LocalJobRunner: 5 / 5 copied. 2024-08-04 19:13:33,783 INFO mapred.Task: Task attempt_local2011001580_0001_r_000000_0 is allowed to commit now 2024-08-04 19:13:33,786 INFO output.FileOutputCommitter: Saved output of task 'attempt_local2011001580_0001_r_000000_0' to file:/content/QuasiMonteCarlo_1722798810530_1260429884/out 2024-08-04 19:13:33,788 INFO mapred.LocalJobRunner: reduce > reduce 2024-08-04 19:13:33,788 INFO mapred.Task: Task 'attempt_local2011001580_0001_r_000000_0' done. 
2024-08-04 19:13:33,792 INFO mapred.Task: Final Counters for attempt_local2011001580_0001_r_000000_0: Counters: 24 File System Counters FILE: Number of bytes read=285572 FILE: Number of bytes written=998097 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 Map-Reduce Framework Combine input records=0 Combine output records=0 Reduce input groups=2 Reduce shuffle bytes=140 Reduce input records=10 Reduce output records=0 Spilled Records=10 Shuffled Maps =5 Failed Shuffles=0 Merged Map outputs=5 GC time elapsed (ms)=0 Total committed heap usage (bytes)=473956352 Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Output Format Counters Bytes Written=109 2024-08-04 19:13:33,792 INFO mapred.LocalJobRunner: Finishing task: attempt_local2011001580_0001_r_000000_0 2024-08-04 19:13:33,794 INFO mapred.LocalJobRunner: reduce task executor complete. 2024-08-04 19:13:34,708 INFO mapreduce.Job: map 100% reduce 100% 2024-08-04 19:13:34,709 INFO mapreduce.Job: Job job_local2011001580_0001 completed successfully 2024-08-04 19:13:34,753 INFO mapreduce.Job: Counters: 30 File System Counters FILE: Number of bytes read=1705470 FILE: Number of bytes written=5986207 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 Map-Reduce Framework Map input records=5 Map output records=10 Map output bytes=90 Map output materialized bytes=140 Input split bytes=640 Combine input records=0 Combine output records=0 Reduce input groups=2 Reduce shuffle bytes=140 Reduce input records=10 Reduce output records=0 Spilled Records=20 Shuffled Maps =5 Failed Shuffles=0 Merged Map outputs=5 GC time elapsed (ms)=47 Total committed heap usage (bytes)=2365587456 Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=650 File Output Format Counters Bytes Written=109 Job Finished in 3.28 seconds Estimated value of Pi is 3.14160000000000000000
The job completed successfully; however, it did not run on the MiniCluster, because we did not specify the MiniCluster's YARN ResourceManager address. In fact, yarn application -list returns no applications (neither running nor finished).
!yarn application -D yarn.resourcemanager.address=localhost:8903 -list -appStates ALL
2024-08-04 19:13:37,808 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at localhost/127.0.0.1:8903 Total number of applications (application-types: [], states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED] and tags: []):0 Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL
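The job above was executed by the LocalJobRunner because mapreduce.framework.name still has its default value (local). If you want to confirm what the client configuration currently contains, a minimal sketch like the following parses mapred-site.xml with the standard library (the get_hadoop_property helper is ours, purely for illustration):
import os
import xml.etree.ElementTree as ET

def get_hadoop_property(conf_file, key):
    # Return the value of a property in a Hadoop *-site.xml file, or None if it is not set.
    tree = ET.parse(conf_file)
    for prop in tree.getroot().findall('property'):
        if prop.findtext('name') == key:
            return prop.findtext('value')
    return None

mapred_site = os.path.join(os.environ['HADOOP_HOME'], 'etc/hadoop/mapred-site.xml')
# Expect None (or 'local') here, which means MapReduce jobs run with the LocalJobRunner.
print(get_hadoop_property(mapred_site, 'mapreduce.framework.name'))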
In order to submit the job to the MiniCluster with YARN, we need to edit three files:
mapred-site.xml
core-site.xml
yarn-site.xml
file_mapred_site = os.path.join(os.environ['HADOOP_HOME'],'etc/hadoop/mapred-site.xml')
file_core_site = os.path.join(os.environ['HADOOP_HOME'],'etc/hadoop/core-site.xml')
file_yarn_site = os.path.join(os.environ['HADOOP_HOME'],'etc/hadoop/yarn-site.xml')
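Below we write the three files by hand (one with a bash heredoc, two from Python). If you prefer, the same XML can be generated with a small helper; this write_hadoop_config function is not part of Hadoop, just an illustrative sketch:
def write_hadoop_config(path, properties):
    # Write a Hadoop *-site.xml file from a dict of property name -> value.
    entries = '\n'.join(
        f'  <property>\n    <name>{name}</name>\n    <value>{value}</value>\n  </property>'
        for name, value in properties.items()
    )
    with open(path, 'w') as f:
        f.write(f'<?xml version="1.0"?>\n<configuration>\n{entries}\n</configuration>\n')

# For example, the mapred-site.xml written in the next cell would be roughly equivalent to:
# write_hadoop_config(file_mapred_site, {
#     'mapreduce.framework.name': 'yarn',
#     'mapreduce.application.classpath':
#         os.environ['HADOOP_HOME'] + '/share/hadoop/mapreduce/*:'
#         + os.environ['HADOOP_HOME'] + '/share/hadoop/mapreduce/lib/*',
# })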
%%bash
cat > $HADOOP_HOME/'etc/hadoop/mapred-site.xml' << 🐸
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>${HADOOP_HOME}/share/hadoop/mapreduce/*:${HADOOP_HOME}/share/hadoop/mapreduce/lib/*</value>
</property>
</configuration>
🐸
!cat $HADOOP_HOME/'etc/hadoop/mapred-site.xml'
<configuration> <property> <name>mapreduce.framework.name</name> <value>yarn</value> </property> <property> <name>mapreduce.application.classpath</name> <value>/content/hadoop-3.4.0/share/hadoop/mapreduce/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*</value> </property> </configuration>
Set the MiniCluster's HDFS as the default filesystem in core-site.xml. This is necessary to allow YARN to save the application logs to HDFS, and it is why we create the /tmp/logs directory on HDFS.
with open(file_core_site, 'w') as file:
file.write("""
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:8902/</value>
</property>
</configuration>""")
Set yarn.resourcemanager.address=localhost:8903 in yarn-site.xml, and enable log aggregation (yarn.log-aggregation-enable=true) so that the container logs are collected on HDFS.
with open(file_yarn_site, 'w') as file:
file.write("""
<configuration>
<property>
<name>yarn.resourcemanager.address</name>
<value>localhost:8903</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
</configuration>""")
At this point we restart the MiniCluster so that it picks up the new configuration.
process.kill()
!pkill -f java # kill java processes
with open('out.txt', "w") as stdout_file, open('err.txt', "w") as stderr_file:
process = subprocess.Popen(
["mapred", "minicluster", "-format", "-jhsport", "8900", "-nnhttpport", "8901", "-nnport", "8902", "-rmport", "8903"],
stdout=stdout_file,
stderr=stderr_file
)
Verify that the MiniCluster is running.
for att in range(10):
with open('err.txt') as myfile:
if 'Started MiniMRCluster' in myfile.read():
print('MiniCluster is up and running')
break
else:
time.sleep(2)
MiniCluster is up and running
!lsof -n -i -P +c0 -sTCP:LISTEN -ac java
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME java 5488 root 341u IPv4 149683 0t0 TCP 127.0.0.1:8901 (LISTEN) java 5488 root 351u IPv4 148854 0t0 TCP 127.0.0.1:8902 (LISTEN) java 5488 root 361u IPv4 148862 0t0 TCP 127.0.0.1:36547 (LISTEN) java 5488 root 364u IPv4 148865 0t0 TCP 127.0.0.1:40385 (LISTEN) java 5488 root 393u IPv4 150500 0t0 TCP 127.0.0.1:42681 (LISTEN) java 5488 root 394u IPv4 148903 0t0 TCP 127.0.0.1:34431 (LISTEN) java 5488 root 420u IPv4 158000 0t0 TCP *:8031 (LISTEN) java 5488 root 434u IPv4 154506 0t0 TCP *:10033 (LISTEN) java 5488 root 444u IPv4 157946 0t0 TCP *:19888 (LISTEN) java 5488 root 446u IPv4 154605 0t0 TCP *:8088 (LISTEN) java 5488 root 454u IPv4 157989 0t0 TCP 127.0.0.1:8900 (LISTEN) java 5488 root 464u IPv4 157995 0t0 TCP *:8033 (LISTEN) java 5488 root 484u IPv4 158005 0t0 TCP *:8030 (LISTEN) java 5488 root 494u IPv4 158009 0t0 TCP 127.0.0.1:8903 (LISTEN) java 5488 root 504u IPv4 158797 0t0 TCP 127.0.0.1:34535 (LISTEN) java 5488 root 514u IPv4 158027 0t0 TCP 127.0.0.1:34217 (LISTEN) java 5488 root 524u IPv4 158801 0t0 TCP *:43847 (LISTEN) java 5488 root 525u IPv4 158031 0t0 TCP 127.0.0.1:37361 (LISTEN)
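Now that the freshly formatted HDFS is back up, we can create the /tmp/logs directory mentioned above, which is the default remote directory used by YARN log aggregation. The -p flag makes the command a no-op if the directory already exists; if the NameNode is still in safe mode, wait a few seconds and retry.
!hdfs dfs -mkdir -p /tmp/logs
!hdfs dfs -chmod 1777 /tmp/logs  # world-writable with sticky bit, as is common for the shared log directory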
Submit the app again. You should see at the very beginning of the output:
Connecting to ResourceManager at localhost/127.0.0.1:8903
This means that YARN has read its configuration file.
pi app on the MiniCluster with YARN
Let us run the job on the MiniCluster.
!yarn jar ./hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar pi \
5 1000
Number of Maps = 5 Samples per Map = 1000
04-Aug-24 07:13:59 PM - INFO: t=2024-08-04T19:13:59+0000 lvl=info msg="join connections" obj=join id=4944f91c91d7 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49611 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=b6557ad3937b l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49611 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=31c3ae28621f l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=5aa288b19d68 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=ec278b8a4015 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=c8a82bdeb45c l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=6732850300e3 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=a6678ebec0fb l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=88e97548a4a8 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=cd4c21a2f9be l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=a1f21ec754c8 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=524ebc6a8a13 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=01a4d73455a4 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=d2679b39ea53 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=0ac5e9002df7 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:00 PM - INFO: t=2024-08-04T19:14:00+0000 lvl=info msg="join connections" obj=join id=4088f300f1b2 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:01 PM - INFO: t=2024-08-04T19:14:01+0000 lvl=info msg="join connections" obj=join id=2c22c3282878 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:01 PM - INFO: t=2024-08-04T19:14:01+0000 lvl=info msg="join connections" obj=join id=9b7da82d412c l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:01 PM - INFO: t=2024-08-04T19:14:01+0000 lvl=info msg="join connections" obj=join id=9ad5e9b956c0 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:01 PM - INFO: t=2024-08-04T19:14:01+0000 lvl=info msg="join connections" obj=join id=2aaca7c3ee6e l=127.0.0.1:8901 
r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:01 PM - INFO: t=2024-08-04T19:14:01+0000 lvl=info msg="join connections" obj=join id=71230148c91d l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:01 PM - INFO: t=2024-08-04T19:14:01+0000 lvl=info msg="join connections" obj=join id=fdd27abddc4c l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:01 PM - INFO: t=2024-08-04T19:14:01+0000 lvl=info msg="join connections" obj=join id=dd6afb0f3192 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620 04-Aug-24 07:14:01 PM - INFO: t=2024-08-04T19:14:01+0000 lvl=info msg="join connections" obj=join id=d4821357ab57 l=127.0.0.1:8901 r=[2a02:8388:6cc5:e800:70ee:91e:4517:7eea]:49620
Wrote input for Map #0 Wrote input for Map #1 Wrote input for Map #2 Wrote input for Map #3 Wrote input for Map #4 Starting Job 2024-08-04 19:14:02,329 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at localhost/127.0.0.1:8903 2024-08-04 19:14:03,303 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1722798834650_0001 2024-08-04 19:14:03,915 INFO input.FileInputFormat: Total input files to process : 5 2024-08-04 19:14:04,481 INFO mapreduce.JobSubmitter: number of splits:5 2024-08-04 19:14:05,484 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1722798834650_0001 2024-08-04 19:14:05,485 INFO mapreduce.JobSubmitter: Executing with tokens: [] 2024-08-04 19:14:06,493 INFO conf.Configuration: resource-types.xml not found 2024-08-04 19:14:06,503 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'. 2024-08-04 19:14:07,764 INFO impl.YarnClientImpl: Submitted application application_1722798834650_0001 2024-08-04 19:14:07,940 INFO mapreduce.Job: The url to track the job: http://446a375b7fe4:8088/proxy/application_1722798834650_0001/ 2024-08-04 19:14:07,947 INFO mapreduce.Job: Running job: job_1722798834650_0001 2024-08-04 19:14:21,800 INFO mapreduce.Job: Job job_1722798834650_0001 running in uber mode : false 2024-08-04 19:14:21,802 INFO mapreduce.Job: map 0% reduce 0% 2024-08-04 19:14:35,193 INFO mapreduce.Job: map 40% reduce 0% 2024-08-04 19:14:47,348 INFO mapreduce.Job: map 80% reduce 0% 2024-08-04 19:14:59,592 INFO mapreduce.Job: map 100% reduce 0% 2024-08-04 19:15:00,604 INFO mapreduce.Job: map 100% reduce 100% 2024-08-04 19:15:01,621 INFO mapreduce.Job: Job job_1722798834650_0001 completed successfully 2024-08-04 19:15:01,756 INFO mapreduce.Job: Counters: 54 File System Counters FILE: Number of bytes read=116 FILE: Number of bytes written=1856793 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=1320 HDFS: Number of bytes written=215 HDFS: Number of read operations=25 HDFS: Number of large read operations=0 HDFS: Number of write operations=3 HDFS: Number of bytes read erasure-coded=0 Job Counters Launched map tasks=5 Launched reduce tasks=1 Data-local map tasks=5 Total time spent by all maps in occupied slots (ms)=53665 Total time spent by all reduces in occupied slots (ms)=11104 Total time spent by all map tasks (ms)=53665 Total time spent by all reduce tasks (ms)=11104 Total vcore-milliseconds taken by all map tasks=53665 Total vcore-milliseconds taken by all reduce tasks=11104 Total megabyte-milliseconds taken by all map tasks=54952960 Total megabyte-milliseconds taken by all reduce tasks=11370496 Map-Reduce Framework Map input records=5 Map output records=10 Map output bytes=90 Map output materialized bytes=140 Input split bytes=730 Combine input records=0 Combine output records=0 Reduce input groups=2 Reduce shuffle bytes=140 Reduce input records=10 Reduce output records=0 Spilled Records=20 Shuffled Maps =5 Failed Shuffles=0 Merged Map outputs=5 GC time elapsed (ms)=505 CPU time spent (ms)=5570 Physical memory (bytes) snapshot=1933701120 Virtual memory (bytes) snapshot=16332873728 Total committed heap usage (bytes)=1436549120 Peak Map Physical memory (bytes)=383688704 Peak Map Virtual memory (bytes)=2723983360 Peak Reduce Physical memory (bytes)=217210880 Peak Reduce Virtual memory (bytes)=2730729472 Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 
WRONG_REDUCE=0 File Input Format Counters Bytes Read=590 File Output Format Counters Bytes Written=97 Job Finished in 59.652 seconds Estimated value of Pi is 3.14160000000000000000
We can now see the finished app listed in the YARN ResourceManager (note that this time we do not need to specify the ResourceManager's address with the option -D yarn.resourcemanager.address=localhost:8903).
!yarn application -list -appStates ALL
2024-08-04 19:15:05,035 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at localhost/127.0.0.1:8903 Total number of applications (application-types: [], states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED] and tags: []):1 Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL application_1722798834650_0001 QuasiMonteCarlo MAPREDUCE root root.default FINISHED SUCCEEDED 100% http://446a375b7fe4:19888/jobhistory/job/job_1722798834650_0001
pi app in the background
Let us run the pi app in the background (as a subprocess) and with more mappers, so that it lasts longer and we are able to monitor its progress with the yarn command line.
with open('job_out.txt', "w") as stdout_file, open('job_err.txt', "w") as stderr_file:
process = subprocess.Popen(
["yarn", "jar", "./hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar", "pi",
"-D", "localhost:8903",
"50", "1000000"],
stdout=stdout_file,
stderr=stderr_file
)
time.sleep(10)
!yarn application -list -appStates ALL
2024-08-04 19:15:21,096 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at localhost/127.0.0.1:8903 Total number of applications (application-types: [], states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED] and tags: []):2 Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL application_1722798834650_0001 QuasiMonteCarlo MAPREDUCE root root.default FINISHED SUCCEEDED 100% http://446a375b7fe4:19888/jobhistory/job/job_1722798834650_0001 application_1722798834650_0002 QuasiMonteCarlo MAPREDUCE root root.default ACCEPTED UNDEFINED 0% N/A
If you do not see the newly submitted application in the YARN queue yet, give it some time and re-run the yarn application -list command!
!yarn application -list -appStates ALL
2024-08-04 19:15:28,650 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at localhost/127.0.0.1:8903 Total number of applications (application-types: [], states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED] and tags: []):2 Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL application_1722798834650_0001 QuasiMonteCarlo MAPREDUCE root root.default FINISHED SUCCEEDED 100% http://446a375b7fe4:19888/jobhistory/job/job_1722798834650_0001 application_1722798834650_0002 QuasiMonteCarlo MAPREDUCE root root.default ACCEPTED UNDEFINED 0% N/A
You should now see something like this:
Total number of applications (application-types: [], states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED] and tags: []):2
Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL
application_1707941943926_0001 QuasiMonteCarlo MAPREDUCE root default FINISHED SUCCEEDED 100% http://c532258dcee8:19888/jobhistory/job/job_1707941943926_0001
application_1707941943926_0002 QuasiMonteCarlo MAPREDUCE root default RUNNING UNDEFINED 5.18% http://localhost:43121
The application in status RUNNING is the most recently submitted.
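Rather than re-running yarn application -list by hand, you can poll it from Python until the background job is no longer live. This wait_for_no_live_apps helper is ours, not part of Hadoop; it simply counts the application_... lines printed by the CLI:
import subprocess
import time

def wait_for_no_live_apps(timeout=600, poll=15):
    # Poll the YARN CLI until no application is left in a live state, or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.run(
            ["yarn", "application", "-list",
             "-appStates", "NEW,NEW_SAVING,SUBMITTED,ACCEPTED,RUNNING"],
            capture_output=True, text=True).stdout
        live = [line for line in out.splitlines() if line.strip().startswith("application_")]
        if not live:
            print("No live applications left.")
            return
        print(f"{len(live)} live application(s); checking again in {poll} s ...")
        time.sleep(poll)
    print("Timed out while waiting for the application to finish.")

# wait_for_no_live_apps()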
To view the logs of a finished application use:
yarn logs -applicationId <your app ID>
For the sake of this demo, we will pick the ID of the first successfully finished app with the following shell command:
!yarn application -list -appStates FINISHED 2>/dev/null|grep SUCCEEDED|tail -1| cut -f1
application_1722798834650_0001
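If you prefer to keep the application ID in a Python variable (for instance, to build further commands programmatically), the same pipeline can be run through subprocess; a small sketch:
import subprocess

cmd = "yarn application -list -appStates FINISHED 2>/dev/null | grep SUCCEEDED | tail -1 | cut -f1"
app_id = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout.strip()
print(app_id)
# The aggregated logs could then be fetched with, e.g.:
# subprocess.run(["yarn", "logs", "-applicationId", app_id])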
View the logs for the selected application ID (warning: the output is long!).
%%bash
app_id=$(yarn application -list -appStates FINISHED 2>/dev/null|grep SUCCEEDED|tail -1| cut -f1)
yarn logs -applicationId $app_id
Container: container_1722798834650_0001_01_000004 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:directory.info LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:2101 LogContents: ls -l: total 32 -rw-r--r-- 1 root root 129 Aug 4 19:14 container_tokens -rwx------ 1 root root 964 Aug 4 19:14 default_container_executor_session.sh -rwx------ 1 root root 1019 Aug 4 19:14 default_container_executor.sh lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.jar -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.xml -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml -rwx------ 1 root root 8159 Aug 4 19:14 launch_container.sh drwx--x--- 2 root root 4096 Aug 4 19:14 tmp find -L . -maxdepth 5 -ls: 809807 4 drwx--x--- 3 root root 4096 Aug 4 19:14 . 809848 4 -rwx------ 1 root root 1019 Aug 4 19:14 ./default_container_executor.sh 809751 260 -r-x------ 1 root root 264361 Aug 4 19:14 ./job.xml 809822 4 -rw-r--r-- 1 root root 129 Aug 4 19:14 ./container_tokens 809841 4 -rwx------ 1 root root 964 Aug 4 19:14 ./default_container_executor_session.sh 809826 8 -rwx------ 1 root root 8159 Aug 4 19:14 ./launch_container.sh 809849 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor.sh.crc 809840 4 -rw-r--r-- 1 root root 72 Aug 4 19:14 ./.launch_container.sh.crc 809745 4 drwx------ 2 root root 4096 Aug 4 19:14 ./job.jar 809746 276 -r-x------ 1 root root 281609 Aug 4 19:14 ./job.jar/job.jar 809823 4 -rw-r--r-- 1 root root 12 Aug 4 19:14 ./.container_tokens.crc 809847 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor_session.sh.crc 809821 4 drwx--x--- 2 root root 4096 Aug 4 19:14 ./tmp broken symlinks(find -L . 
-maxdepth 5 -type l -ls): End of LogType:directory.info ******************************************************************************* Container: container_1722798834650_0001_01_000004 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:launch_container.sh LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:8159 LogContents: #!/bin/bash set -o pipefail -e export PRELAUNCH_OUT="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/prelaunch.out" exec >"${PRELAUNCH_OUT}" export PRELAUNCH_ERR="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/prelaunch.err" exec 2>"${PRELAUNCH_ERR}" echo "Setting up env variables" export JAVA_HOME=${JAVA_HOME:-"/usr/lib/jvm/java-11-openjdk-amd64"} export HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-"/content/hadoop-3.4.0"} export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/content/hadoop-3.4.0/etc/hadoop"} export HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HOME=${HADOOP_HOME:-"/content/hadoop-3.4.0"} export PATH=${PATH:-"/content/hadoop-3.4.0/bin:/opt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin"} export LANG=${LANG:-"en_US.UTF-8"} export HADOOP_TOKEN_FILE_LOCATION="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000004/container_tokens" export CONTAINER_ID="container_1722798834650_0001_01_000004" export NM_PORT="34535" export NM_HOST="localhost" export NM_HTTP_PORT="37361" export LOCAL_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001" export LOCAL_USER_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/" export 
LOG_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000004,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000004,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000004,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004" export USER="root" export LOGNAME="root" export HOME="/home/" export PWD="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000004" export LOCALIZATION_COUNTERS="0,548046,0,2,4" export JVM_PID="$$" export NM_AUX_SERVICE_mapreduce_shuffle="AACrRwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=" export STDOUT_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/stdout" export SHELL="/bin/bash" export HADOOP_ROOT_LOGGER="INFO,console" export CLASSPATH="$PWD:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/*:job.jar/classes/:job.jar/lib/*:$PWD/*" export LD_LIBRARY_PATH="$PWD:$HADOOP_COMMON_HOME/lib/native" export STDERR_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/stderr" export HADOOP_CLIENT_OPTS="" export MALLOC_ARENA_MAX="4" echo "Setting up job resources" ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml" "job.xml" ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar" "job.jar" echo "Copying debugging information" # Creating copy of launch script cp "launch_container.sh" "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/launch_container.sh" chmod 640 "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/launch_container.sh" # Determining directory contents echo "ls -l:" 1>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/directory.info" ls -l 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/directory.info" echo "find -L . 
-maxdepth 5 -ls:" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/directory.info" find -L . -maxdepth 5 -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/directory.info" echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/directory.info" find -L . -maxdepth 5 -type l -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/directory.info" echo "Launching container" exec /bin/bash -c "$JAVA_HOME/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=$PWD/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 40041 attempt_1722798834650_0001_m_000002_0 4 1>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/stdout 2>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000004/stderr " End of LogType:launch_container.sh ************************************************************************************ Container: container_1722798834650_0001_01_000004 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:prelaunch.err LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:0 LogContents: End of LogType:prelaunch.err ****************************************************************************** Container: container_1722798834650_0001_01_000004 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:prelaunch.out LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:100 LogContents: Setting up env variables Setting up job resources Copying debugging information Launching container End of LogType:prelaunch.out ****************************************************************************** Container: container_1722798834650_0001_01_000004 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:stderr LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:0 LogContents: End of LogType:stderr *********************************************************************** Container: container_1722798834650_0001_01_000004 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:stdout LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:0 LogContents: End of LogType:stdout *********************************************************************** 
Container: container_1722798834650_0001_01_000004 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:syslog LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:31510 LogContents: 2024-08-04 19:14:39,635 INFO [main] org.apache.hadoop.security.SecurityUtil: Updating Configuration 2024-08-04 19:14:40,070 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties 2024-08-04 19:14:40,481 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s). 2024-08-04 19:14:40,481 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started 2024-08-04 19:14:40,778 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens: [Kind: mapreduce.job, Service: job_1722798834650_0001, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@70f02c32)] 2024-08-04 19:14:40,910 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now. 2024-08-04 19:14:41,581 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001 2024-08-04 19:14:43,074 INFO [main] org.apache.hadoop.mapred.YarnChild: /************************************************************ [system properties] os.name: Linux os.version: 6.1.85+ java.home: /usr/lib/jvm/java-11-openjdk-amd64 java.runtime.version: 11.0.24+8-post-Ubuntu-1ubuntu322.04 java.vendor: Ubuntu java.version: 11.0.24 java.vm.name: OpenJDK 64-Bit Server VM java.class.path: 
/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000004:/content/hadoop-3.4.0/etc/hadoop:/content/hadoop-3.4.0/share/hadoop/common/hadoop-kms-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsp-api-2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/bcprov-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/
share/hadoop/common/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-9.4.53.v20
231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jul-to-slf4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.100.Final.ja
r:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/HikariCP-4.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/n
etty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/l
ib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-router-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-globalpolicygenerator-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-tests-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-plus-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/objenesis-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/l
ib/javax.inject-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jna-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcpkix-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/snakeyaml-2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-servlet-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/codemodel-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-tree-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-api-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcutil-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/fst-2.50.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jsonschema2pojo-core-1.0.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-common-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-jndi-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-commons-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-guice-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-client-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-annotations-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-base-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/mockito-core-2.28.2.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:/content/tar
get/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000004/job.jar
java.io.tmpdir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000004/tmp
user.dir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000004
user.name: root
************************************************************/
2024-08-04 19:14:43,083 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2024-08-04 19:14:44,501 INFO [main] org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory
2024-08-04 19:14:44,503 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 2
2024-08-04 19:14:44,503 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2024-08-04 19:14:44,574 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2024-08-04 19:14:45,146 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://localhost:8902/user/root/QuasiMonteCarlo_1722798838891_1650860168/in/part2:0+118
2024-08-04 19:14:45,350 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2024-08-04 19:14:45,350 INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 100
2024-08-04 19:14:45,350 INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 83886080
2024-08-04 19:14:45,350 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 104857600
2024-08-04 19:14:45,350 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396; length = 6553600
2024-08-04 19:14:45,390 INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2024-08-04 19:14:45,530 INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output
2024-08-04 19:14:45,530 INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output
2024-08-04 19:14:45,530 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600
2024-08-04 19:14:45,530 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
2024-08-04 19:14:45,541 INFO [main] org.apache.hadoop.mapred.MapTask: Finished spill 0
2024-08-04 19:14:45,602 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1722798834650_0001_m_000002_0 is done. And is in the process of committing
2024-08-04 19:14:45,640 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1722798834650_0001_m_000002_0' done.
2024-08-04 19:14:45,661 INFO [main] org.apache.hadoop.mapred.Task: Final Counters for attempt_1722798834650_0001_m_000002_0: Counters: 28
  File System Counters
    FILE: Number of bytes read=0
    FILE: Number of bytes written=309460
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=264
    HDFS: Number of bytes written=0
    HDFS: Number of read operations=4
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=0
    HDFS: Number of bytes read erasure-coded=0
  Map-Reduce Framework
    Map input records=1
    Map output records=2
    Map output bytes=18
    Map output materialized bytes=28
    Input split bytes=146
    Combine input records=0
    Spilled Records=2
    Failed Shuffles=0
    Merged Map outputs=0
    GC time elapsed (ms)=74
    CPU time spent (ms)=920
    Physical memory (bytes) snapshot=327950336
    Virtual memory (bytes) snapshot=2719514624
    Total committed heap usage (bytes)=216006656
    Peak Map Physical memory (bytes)=327950336
    Peak Map Virtual memory (bytes)=2719514624
  File Input Format Counters
    Bytes Read=118
2024-08-04 19:14:45,663 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics system...
2024-08-04 19:14:45,663 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system stopped.
2024-08-04 19:14:45,664 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete.
End of LogType:syslog
***********************************************************************
Container: container_1722798834650_0001_01_000005 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:directory.info
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:2101
LogContents:
ls -l:
total 32
-rw-r--r-- 1 root root 129 Aug 4 19:14 container_tokens
-rwx------ 1 root root 964 Aug 4 19:14 default_container_executor_session.sh
-rwx------ 1 root root 1019 Aug 4 19:14 default_container_executor.sh
lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.jar -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar
lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.xml -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml
-rwx------ 1 root root 8159 Aug 4 19:14 launch_container.sh
drwx--x--- 2 root root 4096 Aug 4 19:14 tmp
find -L . -maxdepth 5 -ls:
809830 4 drwx--x--- 3 root root 4096 Aug 4 19:14 .
809860 4 -rwx------ 1 root root 1019 Aug 4 19:14 ./default_container_executor.sh
809751 260 -r-x------ 1 root root 264361 Aug 4 19:14 ./job.xml
809845 4 -rw-r--r-- 1 root root 129 Aug 4 19:14 ./container_tokens
809857 4 -rwx------ 1 root root 964 Aug 4 19:14 ./default_container_executor_session.sh
809851 8 -rwx------ 1 root root 8159 Aug 4 19:14 ./launch_container.sh
809864 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor.sh.crc
809852 4 -rw-r--r-- 1 root root 72 Aug 4 19:14 ./.launch_container.sh.crc
809745 4 drwx------ 2 root root 4096 Aug 4 19:14 ./job.jar
809746 276 -r-x------ 1 root root 281609 Aug 4 19:14 ./job.jar/job.jar
809846 4 -rw-r--r-- 1 root root 12 Aug 4 19:14 ./.container_tokens.crc
809858 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor_session.sh.crc
809843 4 drwx--x--- 2 root root 4096 Aug 4 19:14 ./tmp
broken symlinks(find -L . -maxdepth 5 -type l -ls):
End of LogType:directory.info
*******************************************************************************
Container: container_1722798834650_0001_01_000005 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:launch_container.sh
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:8159
LogContents:
#!/bin/bash
set -o pipefail -e
export PRELAUNCH_OUT="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/prelaunch.out"
exec >"${PRELAUNCH_OUT}"
export PRELAUNCH_ERR="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/prelaunch.err"
exec 2>"${PRELAUNCH_ERR}"
echo "Setting up env variables"
export JAVA_HOME=${JAVA_HOME:-"/usr/lib/jvm/java-11-openjdk-amd64"}
export HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-"/content/hadoop-3.4.0"}
export HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-"/content/hadoop-3.4.0"}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/content/hadoop-3.4.0/etc/hadoop"}
export HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-"/content/hadoop-3.4.0"}
export HADOOP_HOME=${HADOOP_HOME:-"/content/hadoop-3.4.0"}
export PATH=${PATH:-"/content/hadoop-3.4.0/bin:/opt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin"}
export LANG=${LANG:-"en_US.UTF-8"}
export HADOOP_TOKEN_FILE_LOCATION="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000005/container_tokens"
export CONTAINER_ID="container_1722798834650_0001_01_000005"
export NM_PORT="34535"
export NM_HOST="localhost"
export NM_HTTP_PORT="37361"
export LOCAL_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001"
export LOCAL_USER_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/"
export LOG_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000005,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000005,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000005,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005"
export USER="root"
export LOGNAME="root"
export HOME="/home/"
export PWD="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000005"
export LOCALIZATION_COUNTERS="0,548046,0,2,6"
export JVM_PID="$$"
export NM_AUX_SERVICE_mapreduce_shuffle="AACrRwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
export STDOUT_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/stdout"
export SHELL="/bin/bash"
export HADOOP_ROOT_LOGGER="INFO,console"
export CLASSPATH="$PWD:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/*:job.jar/classes/:job.jar/lib/*:$PWD/*"
export LD_LIBRARY_PATH="$PWD:$HADOOP_COMMON_HOME/lib/native"
export STDERR_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/stderr"
export HADOOP_CLIENT_OPTS=""
export MALLOC_ARENA_MAX="4"
echo "Setting up job resources"
ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml" "job.xml"
ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar" "job.jar"
echo "Copying debugging information"
# Creating copy of launch script
cp "launch_container.sh" "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/launch_container.sh"
chmod 640 "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/launch_container.sh"
# Determining directory contents
echo "ls -l:" 1>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/directory.info"
ls -l 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/directory.info"
echo "find -L . -maxdepth 5 -ls:" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/directory.info"
find -L . -maxdepth 5 -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/directory.info"
echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/directory.info"
find -L . -maxdepth 5 -type l -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/directory.info"
echo "Launching container"
exec /bin/bash -c "$JAVA_HOME/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=$PWD/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 40041 attempt_1722798834650_0001_m_000003_0 5 1>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/stdout 2>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000005/stderr "
End of LogType:launch_container.sh
************************************************************************************
Container: container_1722798834650_0001_01_000005 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:prelaunch.err
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:0
LogContents:
End of LogType:prelaunch.err
******************************************************************************
Container: container_1722798834650_0001_01_000005 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:prelaunch.out
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:100
LogContents:
Setting up env variables
Setting up job resources
Copying debugging information
Launching container
End of LogType:prelaunch.out
******************************************************************************
Container: container_1722798834650_0001_01_000005 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:stderr
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:0
LogContents:
End of LogType:stderr
***********************************************************************
Container: container_1722798834650_0001_01_000005 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:stdout
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:0
LogContents:
End of LogType:stdout
***********************************************************************
Container: container_1722798834650_0001_01_000005 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:syslog
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:31510
LogContents:
2024-08-04 19:14:39,501 INFO [main] org.apache.hadoop.security.SecurityUtil: Updating Configuration
2024-08-04 19:14:39,908 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2024-08-04 19:14:40,230 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2024-08-04 19:14:40,230 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2024-08-04 19:14:40,517 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens: [Kind: mapreduce.job, Service: job_1722798834650_0001, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@70f02c32)]
2024-08-04 19:14:40,663 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2024-08-04 19:14:41,380 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001
2024-08-04 19:14:42,874 INFO [main] org.apache.hadoop.mapred.YarnChild: /************************************************************
[system properties]
os.name: Linux
os.version: 6.1.85+
java.home: /usr/lib/jvm/java-11-openjdk-amd64
java.runtime.version: 11.0.24+8-post-Ubuntu-1ubuntu322.04
java.vendor: Ubuntu
java.version: 11.0.24
java.vm.name: OpenJDK 64-Bit Server VM
java.class.path:
/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000005:/content/hadoop-3.4.0/etc/hadoop:/content/hadoop-3.4.0/share/hadoop/common/hadoop-kms-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsp-api-2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/bcprov-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/
share/hadoop/common/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-9.4.53.v20
231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jul-to-slf4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.100.Final.ja
r:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/HikariCP-4.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/n
etty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/l
ib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-router-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-globalpolicygenerator-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-tests-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-plus-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/objenesis-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/l
ib/javax.inject-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jna-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcpkix-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/snakeyaml-2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-servlet-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/codemodel-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-tree-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-api-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcutil-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/fst-2.50.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jsonschema2pojo-core-1.0.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-common-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-jndi-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-commons-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-guice-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-client-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-annotations-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-base-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/mockito-core-2.28.2.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:/content/tar
get/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000005/job.jar java.io.tmpdir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000005/tmp user.dir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000005 user.name: root ************************************************************/ 2024-08-04 19:14:42,875 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id 2024-08-04 19:14:44,317 INFO [main] org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory 2024-08-04 19:14:44,319 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 2 2024-08-04 19:14:44,319 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2024-08-04 19:14:44,410 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2024-08-04 19:14:45,042 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://localhost:8902/user/root/QuasiMonteCarlo_1722798838891_1650860168/in/part3:0+118 2024-08-04 19:14:45,245 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584) 2024-08-04 19:14:45,245 INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 100 2024-08-04 19:14:45,245 INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 83886080 2024-08-04 19:14:45,245 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 104857600 2024-08-04 19:14:45,245 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396; length = 6553600 2024-08-04 19:14:45,261 INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 2024-08-04 19:14:45,348 INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output 2024-08-04 19:14:45,348 INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output 2024-08-04 19:14:45,352 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600 2024-08-04 19:14:45,352 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600 2024-08-04 19:14:45,372 INFO [main] org.apache.hadoop.mapred.MapTask: Finished spill 0 2024-08-04 19:14:45,441 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1722798834650_0001_m_000003_0 is done. And is in the process of committing 2024-08-04 19:14:45,493 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1722798834650_0001_m_000003_0' done. 
2024-08-04 19:14:45,517 INFO [main] org.apache.hadoop.mapred.Task: Final Counters for attempt_1722798834650_0001_m_000003_0: Counters: 28 File System Counters FILE: Number of bytes read=0 FILE: Number of bytes written=309460 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=264 HDFS: Number of bytes written=0 HDFS: Number of read operations=4 HDFS: Number of large read operations=0 HDFS: Number of write operations=0 HDFS: Number of bytes read erasure-coded=0 Map-Reduce Framework Map input records=1 Map output records=2 Map output bytes=18 Map output materialized bytes=28 Input split bytes=146 Combine input records=0 Spilled Records=2 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=80 CPU time spent (ms)=900 Physical memory (bytes) snapshot=311812096 Virtual memory (bytes) snapshot=2722275328 Total committed heap usage (bytes)=216006656 Peak Map Physical memory (bytes)=311812096 Peak Map Virtual memory (bytes)=2722275328 File Input Format Counters Bytes Read=118 2024-08-04 19:14:45,519 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics system... 2024-08-04 19:14:45,520 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system stopped. 2024-08-04 19:14:45,520 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete. End of LogType:syslog *********************************************************************** Container: container_1722798834650_0001_01_000002 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:directory.info LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:2101 LogContents: ls -l: total 32 -rw-r--r-- 1 root root 129 Aug 4 19:14 container_tokens -rwx------ 1 root root 964 Aug 4 19:14 default_container_executor_session.sh -rwx------ 1 root root 1019 Aug 4 19:14 default_container_executor.sh lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.jar -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.xml -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml -rwx------ 1 root root 8159 Aug 4 19:14 launch_container.sh drwx--x--- 2 root root 4096 Aug 4 19:14 tmp find -L . -maxdepth 5 -ls: 809832 4 drwx--x--- 3 root root 4096 Aug 4 19:14 . 
809849 4 -rwx------ 1 root root 1019 Aug 4 19:14 ./default_container_executor.sh 809751 260 -r-x------ 1 root root 264361 Aug 4 19:14 ./job.xml 809840 4 -rw-r--r-- 1 root root 129 Aug 4 19:14 ./container_tokens 809847 4 -rwx------ 1 root root 964 Aug 4 19:14 ./default_container_executor_session.sh 809845 8 -rwx------ 1 root root 8159 Aug 4 19:14 ./launch_container.sh 809851 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor.sh.crc 809846 4 -rw-r--r-- 1 root root 72 Aug 4 19:14 ./.launch_container.sh.crc 809745 4 drwx------ 2 root root 4096 Aug 4 19:14 ./job.jar 809746 276 -r-x------ 1 root root 281609 Aug 4 19:14 ./job.jar/job.jar 809841 4 -rw-r--r-- 1 root root 12 Aug 4 19:14 ./.container_tokens.crc 809848 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor_session.sh.crc 809839 4 drwx--x--- 2 root root 4096 Aug 4 19:14 ./tmp broken symlinks(find -L . -maxdepth 5 -type l -ls): End of LogType:directory.info ******************************************************************************* Container: container_1722798834650_0001_01_000002 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:launch_container.sh LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:8159 LogContents: #!/bin/bash set -o pipefail -e export PRELAUNCH_OUT="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/prelaunch.out" exec >"${PRELAUNCH_OUT}" export PRELAUNCH_ERR="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/prelaunch.err" exec 2>"${PRELAUNCH_ERR}" echo "Setting up env variables" export JAVA_HOME=${JAVA_HOME:-"/usr/lib/jvm/java-11-openjdk-amd64"} export HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-"/content/hadoop-3.4.0"} export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/content/hadoop-3.4.0/etc/hadoop"} export HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HOME=${HADOOP_HOME:-"/content/hadoop-3.4.0"} export PATH=${PATH:-"/content/hadoop-3.4.0/bin:/opt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin"} export LANG=${LANG:-"en_US.UTF-8"} export HADOOP_TOKEN_FILE_LOCATION="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000002/container_tokens" export CONTAINER_ID="container_1722798834650_0001_01_000002" export NM_PORT="34535" export NM_HOST="localhost" export NM_HTTP_PORT="37361" export LOCAL_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001" export 
LOCAL_USER_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/" export LOG_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000002,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000002,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000002" export USER="root" export LOGNAME="root" export HOME="/home/" export PWD="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000002" export LOCALIZATION_COUNTERS="0,548046,0,2,5" export JVM_PID="$$" export NM_AUX_SERVICE_mapreduce_shuffle="AACrRwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=" export STDOUT_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/stdout" export SHELL="/bin/bash" export HADOOP_ROOT_LOGGER="INFO,console" export CLASSPATH="$PWD:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/*:job.jar/classes/:job.jar/lib/*:$PWD/*" export LD_LIBRARY_PATH="$PWD:$HADOOP_COMMON_HOME/lib/native" export STDERR_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/stderr" export HADOOP_CLIENT_OPTS="" export MALLOC_ARENA_MAX="4" echo "Setting up job resources" ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml" "job.xml" ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar" "job.jar" echo "Copying debugging information" # Creating copy of launch script cp "launch_container.sh" "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/launch_container.sh" chmod 640 
"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/launch_container.sh" # Determining directory contents echo "ls -l:" 1>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/directory.info" ls -l 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/directory.info" echo "find -L . -maxdepth 5 -ls:" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/directory.info" find -L . -maxdepth 5 -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/directory.info" echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/directory.info" find -L . -maxdepth 5 -type l -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/directory.info" echo "Launching container" exec /bin/bash -c "$JAVA_HOME/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=$PWD/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 40041 attempt_1722798834650_0001_m_000000_0 2 1>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/stdout 2>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000002/stderr " End of LogType:launch_container.sh ************************************************************************************ Container: container_1722798834650_0001_01_000002 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:prelaunch.err LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:0 LogContents: End of LogType:prelaunch.err ****************************************************************************** Container: container_1722798834650_0001_01_000002 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:prelaunch.out LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:100 LogContents: Setting up env variables Setting up job resources Copying debugging information Launching container End of LogType:prelaunch.out ****************************************************************************** Container: container_1722798834650_0001_01_000002 on 
localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:stderr LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:0 LogContents: End of LogType:stderr *********************************************************************** Container: container_1722798834650_0001_01_000002 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:stdout LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:0 LogContents: End of LogType:stdout *********************************************************************** Container: container_1722798834650_0001_01_000002 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:syslog LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:31510 LogContents: 2024-08-04 19:14:26,405 INFO [main] org.apache.hadoop.security.SecurityUtil: Updating Configuration 2024-08-04 19:14:26,656 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties 2024-08-04 19:14:27,095 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s). 2024-08-04 19:14:27,095 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started 2024-08-04 19:14:27,421 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens: [Kind: mapreduce.job, Service: job_1722798834650_0001, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@70f02c32)] 2024-08-04 19:14:27,614 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now. 2024-08-04 19:14:28,578 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001 2024-08-04 19:14:30,299 INFO [main] org.apache.hadoop.mapred.YarnChild: /************************************************************ [system properties] os.name: Linux os.version: 6.1.85+ java.home: /usr/lib/jvm/java-11-openjdk-amd64 java.runtime.version: 11.0.24+8-post-Ubuntu-1ubuntu322.04 java.vendor: Ubuntu java.version: 11.0.24 java.vm.name: OpenJDK 64-Bit Server VM java.class.path: 
/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000002:/content/hadoop-3.4.0/etc/hadoop:/content/hadoop-3.4.0/share/hadoop/common/hadoop-kms-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsp-api-2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/bcprov-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/
share/hadoop/common/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-9.4.53.v20
231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jul-to-slf4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.100.Final.ja
r:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/HikariCP-4.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/n
etty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/l
ib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-router-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-globalpolicygenerator-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-tests-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-plus-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/objenesis-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/l
ib/javax.inject-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jna-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcpkix-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/snakeyaml-2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-servlet-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/codemodel-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-tree-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-api-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcutil-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/fst-2.50.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jsonschema2pojo-core-1.0.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-common-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-jndi-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-commons-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-guice-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-client-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-annotations-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-base-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/mockito-core-2.28.2.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:/content/tar
get/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000002/job.jar java.io.tmpdir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000002/tmp user.dir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000002 user.name: root ************************************************************/ 2024-08-04 19:14:30,300 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id 2024-08-04 19:14:31,952 INFO [main] org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory 2024-08-04 19:14:31,957 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 2 2024-08-04 19:14:31,958 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2024-08-04 19:14:32,045 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2024-08-04 19:14:32,998 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://localhost:8902/user/root/QuasiMonteCarlo_1722798838891_1650860168/in/part0:0+118 2024-08-04 19:14:33,519 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584) 2024-08-04 19:14:33,520 INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 100 2024-08-04 19:14:33,520 INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 83886080 2024-08-04 19:14:33,520 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 104857600 2024-08-04 19:14:33,520 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396; length = 6553600 2024-08-04 19:14:33,562 INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 2024-08-04 19:14:33,974 INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output 2024-08-04 19:14:33,974 INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output 2024-08-04 19:14:33,974 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600 2024-08-04 19:14:33,974 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600 2024-08-04 19:14:34,008 INFO [main] org.apache.hadoop.mapred.MapTask: Finished spill 0 2024-08-04 19:14:34,066 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1722798834650_0001_m_000000_0 is done. And is in the process of committing 2024-08-04 19:14:34,129 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1722798834650_0001_m_000000_0' done. 
2024-08-04 19:14:34,161 INFO [main] org.apache.hadoop.mapred.Task: Final Counters for attempt_1722798834650_0001_m_000000_0: Counters: 28 File System Counters FILE: Number of bytes read=0 FILE: Number of bytes written=309460 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=264 HDFS: Number of bytes written=0 HDFS: Number of read operations=4 HDFS: Number of large read operations=0 HDFS: Number of write operations=0 HDFS: Number of bytes read erasure-coded=0 Map-Reduce Framework Map input records=1 Map output records=2 Map output bytes=18 Map output materialized bytes=28 Input split bytes=146 Combine input records=0 Spilled Records=2 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=88 CPU time spent (ms)=880 Physical memory (bytes) snapshot=331776000 Virtual memory (bytes) snapshot=2719916032 Total committed heap usage (bytes)=216006656 Peak Map Physical memory (bytes)=331776000 Peak Map Virtual memory (bytes)=2719916032 File Input Format Counters Bytes Read=118 2024-08-04 19:14:34,163 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics system... 2024-08-04 19:14:34,164 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system stopped. 2024-08-04 19:14:34,164 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete. End of LogType:syslog *********************************************************************** Container: container_1722798834650_0001_01_000003 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:directory.info LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:2101 LogContents: ls -l: total 32 -rw-r--r-- 1 root root 129 Aug 4 19:14 container_tokens -rwx------ 1 root root 964 Aug 4 19:14 default_container_executor_session.sh -rwx------ 1 root root 1019 Aug 4 19:14 default_container_executor.sh lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.jar -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.xml -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml -rwx------ 1 root root 8160 Aug 4 19:14 launch_container.sh drwx--x--- 2 root root 4096 Aug 4 19:14 tmp find -L . -maxdepth 5 -ls: 809809 4 drwx--x--- 3 root root 4096 Aug 4 19:14 . 
809824 4 -rwx------ 1 root root 1019 Aug 4 19:14 ./default_container_executor.sh 809751 260 -r-x------ 1 root root 264361 Aug 4 19:14 ./job.xml 809816 4 -rw-r--r-- 1 root root 129 Aug 4 19:14 ./container_tokens 809822 4 -rwx------ 1 root root 964 Aug 4 19:14 ./default_container_executor_session.sh 809820 8 -rwx------ 1 root root 8160 Aug 4 19:14 ./launch_container.sh 809825 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor.sh.crc 809821 4 -rw-r--r-- 1 root root 72 Aug 4 19:14 ./.launch_container.sh.crc 809745 4 drwx------ 2 root root 4096 Aug 4 19:14 ./job.jar 809746 276 -r-x------ 1 root root 281609 Aug 4 19:14 ./job.jar/job.jar 809817 4 -rw-r--r-- 1 root root 12 Aug 4 19:14 ./.container_tokens.crc 809823 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor_session.sh.crc 809815 4 drwx--x--- 2 root root 4096 Aug 4 19:14 ./tmp broken symlinks(find -L . -maxdepth 5 -type l -ls): End of LogType:directory.info ******************************************************************************* Container: container_1722798834650_0001_01_000003 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:launch_container.sh LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:8160 LogContents: #!/bin/bash set -o pipefail -e export PRELAUNCH_OUT="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/prelaunch.out" exec >"${PRELAUNCH_OUT}" export PRELAUNCH_ERR="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/prelaunch.err" exec 2>"${PRELAUNCH_ERR}" echo "Setting up env variables" export JAVA_HOME=${JAVA_HOME:-"/usr/lib/jvm/java-11-openjdk-amd64"} export HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-"/content/hadoop-3.4.0"} export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/content/hadoop-3.4.0/etc/hadoop"} export HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HOME=${HADOOP_HOME:-"/content/hadoop-3.4.0"} export PATH=${PATH:-"/content/hadoop-3.4.0/bin:/opt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin"} export LANG=${LANG:-"en_US.UTF-8"} export HADOOP_TOKEN_FILE_LOCATION="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000003/container_tokens" export CONTAINER_ID="container_1722798834650_0001_01_000003" export NM_PORT="34535" export NM_HOST="localhost" export NM_HTTP_PORT="37361" export LOCAL_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001" export 
LOCAL_USER_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/" export LOG_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000003,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000003,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000003" export USER="root" export LOGNAME="root" export HOME="/home/" export PWD="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000003" export LOCALIZATION_COUNTERS="0,548046,0,2,26" export JVM_PID="$$" export NM_AUX_SERVICE_mapreduce_shuffle="AACrRwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=" export STDOUT_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/stdout" export SHELL="/bin/bash" export HADOOP_ROOT_LOGGER="INFO,console" export CLASSPATH="$PWD:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/*:job.jar/classes/:job.jar/lib/*:$PWD/*" export LD_LIBRARY_PATH="$PWD:$HADOOP_COMMON_HOME/lib/native" export STDERR_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/stderr" export HADOOP_CLIENT_OPTS="" export MALLOC_ARENA_MAX="4" echo "Setting up job resources" ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml" "job.xml" ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar" "job.jar" echo "Copying debugging information" # Creating copy of launch script cp "launch_container.sh" "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/launch_container.sh" chmod 640 
"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/launch_container.sh" # Determining directory contents echo "ls -l:" 1>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/directory.info" ls -l 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/directory.info" echo "find -L . -maxdepth 5 -ls:" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/directory.info" find -L . -maxdepth 5 -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/directory.info" echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/directory.info" find -L . -maxdepth 5 -type l -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/directory.info" echo "Launching container" exec /bin/bash -c "$JAVA_HOME/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=$PWD/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 40041 attempt_1722798834650_0001_m_000001_0 3 1>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/stdout 2>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000003/stderr " End of LogType:launch_container.sh ************************************************************************************ Container: container_1722798834650_0001_01_000003 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:prelaunch.err LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:0 LogContents: End of LogType:prelaunch.err ****************************************************************************** Container: container_1722798834650_0001_01_000003 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:prelaunch.out LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:100 LogContents: Setting up env variables Setting up job resources Copying debugging information Launching container End of LogType:prelaunch.out ****************************************************************************** Container: container_1722798834650_0001_01_000003 on 
localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:stderr
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:0
LogContents:
End of LogType:stderr
***********************************************************************
Container: container_1722798834650_0001_01_000003 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:stdout
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:0
LogContents:
End of LogType:stdout
***********************************************************************
Container: container_1722798834650_0001_01_000003 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:syslog
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:31510
LogContents:
2024-08-04 19:14:26,096 INFO [main] org.apache.hadoop.security.SecurityUtil: Updating Configuration
2024-08-04 19:14:26,500 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2024-08-04 19:14:26,997 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2024-08-04 19:14:26,998 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2024-08-04 19:14:27,330 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens: [Kind: mapreduce.job, Service: job_1722798834650_0001, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@1fa1cab1)]
2024-08-04 19:14:27,429 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2024-08-04 19:14:28,423 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001
2024-08-04 19:14:30,060 INFO [main] org.apache.hadoop.mapred.YarnChild: /************************************************************
[system properties]
os.name: Linux
os.version: 6.1.85+
java.home: /usr/lib/jvm/java-11-openjdk-amd64
java.runtime.version: 11.0.24+8-post-Ubuntu-1ubuntu322.04
java.vendor: Ubuntu
java.version: 11.0.24
java.vm.name: OpenJDK 64-Bit Server VM
java.class.path:
/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000003:/content/hadoop-3.4.0/etc/hadoop:/content/hadoop-3.4.0/share/hadoop/common/hadoop-kms-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsp-api-2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/bcprov-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/
share/hadoop/common/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-9.4.53.v20
231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jul-to-slf4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.100.Final.ja
r:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/HikariCP-4.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/n
etty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/l
ib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-router-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-globalpolicygenerator-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-tests-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-plus-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/objenesis-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/l
ib/javax.inject-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jna-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcpkix-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/snakeyaml-2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-servlet-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/codemodel-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-tree-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-api-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcutil-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/fst-2.50.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jsonschema2pojo-core-1.0.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-common-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-jndi-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-commons-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-guice-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-client-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-annotations-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-base-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/mockito-core-2.28.2.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:/content/tar
get/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000003/job.jar
java.io.tmpdir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000003/tmp
user.dir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000003
user.name: root
************************************************************/
2024-08-04 19:14:30,111 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2024-08-04 19:14:31,735 INFO [main] org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory
2024-08-04 19:14:31,737 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 2
2024-08-04 19:14:31,737 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2024-08-04 19:14:31,857 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2024-08-04 19:14:32,725 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://localhost:8902/user/root/QuasiMonteCarlo_1722798838891_1650860168/in/part1:0+118
2024-08-04 19:14:33,224 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2024-08-04 19:14:33,225 INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 100
2024-08-04 19:14:33,225 INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 83886080
2024-08-04 19:14:33,225 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 104857600
2024-08-04 19:14:33,225 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396; length = 6553600
2024-08-04 19:14:33,266 INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2024-08-04 19:14:33,436 INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output
2024-08-04 19:14:33,436 INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output
2024-08-04 19:14:33,436 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600
2024-08-04 19:14:33,436 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
2024-08-04 19:14:33,459 INFO [main] org.apache.hadoop.mapred.MapTask: Finished spill 0
2024-08-04 19:14:33,555 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1722798834650_0001_m_000001_0 is done. And is in the process of committing
2024-08-04 19:14:33,652 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1722798834650_0001_m_000001_0' done.
2024-08-04 19:14:33,767 INFO [main] org.apache.hadoop.mapred.Task: Final Counters for attempt_1722798834650_0001_m_000001_0: Counters: 28
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=309460
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=264
HDFS: Number of bytes written=0
HDFS: Number of read operations=4
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
HDFS: Number of bytes read erasure-coded=0
Map-Reduce Framework
Map input records=1
Map output records=2
Map output bytes=18
Map output materialized bytes=28
Input split bytes=146
Combine input records=0
Spilled Records=2
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=80
CPU time spent (ms)=980
Physical memory (bytes) snapshot=383688704
Virtual memory (bytes) snapshot=2723983360
Total committed heap usage (bytes)=298844160
Peak Map Physical memory (bytes)=383688704
Peak Map Virtual memory (bytes)=2723983360
File Input Format Counters
Bytes Read=118
2024-08-04 19:14:33,771 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics system...
2024-08-04 19:14:33,771 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system stopped.
2024-08-04 19:14:33,772 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete.
End of LogType:syslog
***********************************************************************
Container: container_1722798834650_0001_01_000001 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:directory.info
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:2444
LogContents:
ls -l:
total 36
-rw-r--r-- 1 root root 105 Aug 4 19:14 container_tokens
-rwx------ 1 root root 964 Aug 4 19:14 default_container_executor_session.sh
-rwx------ 1 root root 1019 Aug 4 19:14 default_container_executor.sh
lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.jar -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar
drwxr-xr-x 2 root root 4096 Aug 4 19:14 jobSubmitDir
lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.xml -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml
-rwx------ 1 root root 8184 Aug 4 19:14 launch_container.sh
drwx--x--- 2 root root 4096 Aug 4 19:14 tmp
find -L . -maxdepth 5 -ls:
809763 4 drwx--x--- 4 root root 4096 Aug 4 19:14 .
809778 4 -rwx------ 1 root root 1019 Aug 4 19:14 ./default_container_executor.sh 809751 260 -r-x------ 1 root root 264361 Aug 4 19:14 ./job.xml 809772 4 -rw-r--r-- 1 root root 105 Aug 4 19:14 ./container_tokens 809776 4 -rwx------ 1 root root 964 Aug 4 19:14 ./default_container_executor_session.sh 809774 8 -rwx------ 1 root root 8184 Aug 4 19:14 ./launch_container.sh 809779 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor.sh.crc 809775 4 -rw-r--r-- 1 root root 72 Aug 4 19:14 ./.launch_container.sh.crc 809745 4 drwx------ 2 root root 4096 Aug 4 19:14 ./job.jar 809746 276 -r-x------ 1 root root 281609 Aug 4 19:14 ./job.jar/job.jar 809773 4 -rw-r--r-- 1 root root 12 Aug 4 19:14 ./.container_tokens.crc 809784 4 drwxr-xr-x 2 root root 4096 Aug 4 19:14 ./jobSubmitDir 809748 4 -r-x------ 1 root root 737 Aug 4 19:14 ./jobSubmitDir/job.split 809738 4 -r-x------ 1 root root 82 Aug 4 19:14 ./jobSubmitDir/job.splitmetainfo 809777 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor_session.sh.crc 809771 4 drwx--x--- 2 root root 4096 Aug 4 19:14 ./tmp broken symlinks(find -L . -maxdepth 5 -type l -ls): End of LogType:directory.info ******************************************************************************* Container: container_1722798834650_0001_01_000001 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:launch_container.sh LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:8184 LogContents: #!/bin/bash set -o pipefail -e export PRELAUNCH_OUT="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/prelaunch.out" exec >"${PRELAUNCH_OUT}" export PRELAUNCH_ERR="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/prelaunch.err" exec 2>"${PRELAUNCH_ERR}" echo "Setting up env variables" export JAVA_HOME=${JAVA_HOME:-"/usr/lib/jvm/java-11-openjdk-amd64"} export HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-"/content/hadoop-3.4.0"} export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/content/hadoop-3.4.0/etc/hadoop"} export HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HOME=${HADOOP_HOME:-"/content/hadoop-3.4.0"} export PATH=${PATH:-"/content/hadoop-3.4.0/bin:/opt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin"} export LANG=${LANG:-"en_US.UTF-8"} export HADOOP_TOKEN_FILE_LOCATION="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000001/container_tokens" export CONTAINER_ID="container_1722798834650_0001_01_000001" export NM_PORT="34535" export NM_HOST="localhost" export NM_HTTP_PORT="37361" export 
LOCAL_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001" export LOCAL_USER_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/" export LOG_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000001" export USER="root" export LOGNAME="root" export HOME="/home/" export PWD="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000001" export LOCALIZATION_COUNTERS="548893,0,4,0,818" export JVM_PID="$$" export NM_AUX_SERVICE_mapreduce_shuffle="AACrRwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=" export APPLICATION_WEB_PROXY_BASE="/proxy/application_1722798834650_0001" export SHELL="/bin/bash" export CLASSPATH="$PWD:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/*:job.jar/classes/:job.jar/lib/*:$PWD/*" export APP_SUBMIT_TIME_ENV="1722798847160" export LD_LIBRARY_PATH="$PWD:$HADOOP_COMMON_HOME/lib/native" export MALLOC_ARENA_MAX="4" echo "Setting up job resources" ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml" "job.xml" mkdir -p jobSubmitDir ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001/filecache/12/job.split" "jobSubmitDir/job.split" ln -sf -- 
"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar" "job.jar" mkdir -p jobSubmitDir ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/filecache/10/job.splitmetainfo" "jobSubmitDir/job.splitmetainfo" echo "Copying debugging information" # Creating copy of launch script cp "launch_container.sh" "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/launch_container.sh" chmod 640 "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/launch_container.sh" # Determining directory contents echo "ls -l:" 1>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/directory.info" ls -l 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/directory.info" echo "find -L . -maxdepth 5 -ls:" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/directory.info" find -L . -maxdepth 5 -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/directory.info" echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/directory.info" find -L . 
-maxdepth 5 -type l -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/directory.info" echo "Launching container" exec /bin/bash -c "$JAVA_HOME/bin/java -Djava.io.tmpdir=$PWD/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/stdout 2>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000001/stderr " End of LogType:launch_container.sh ************************************************************************************ Container: container_1722798834650_0001_01_000001 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:prelaunch.err LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:0 LogContents: End of LogType:prelaunch.err ****************************************************************************** Container: container_1722798834650_0001_01_000001 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:prelaunch.out LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:100 LogContents: Setting up env variables Setting up job resources Copying debugging information Launching container End of LogType:prelaunch.out ****************************************************************************** Container: container_1722798834650_0001_01_000001 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:stderr LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:2344 LogContents: WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by com.google.inject.internal.cglib.core.$ReflectUtils$1 (file:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-4.2.3.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) WARNING: Please consider reporting this to the maintainers of com.google.inject.internal.cglib.core.$ReflectUtils$1 WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release Aug 04, 2024 7:14:17 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class Aug 04, 2024 7:14:17 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class Aug 04, 2024 7:14:17 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class Aug 04, 2024 7:14:17 PM 
com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19.4 05/24/2017 03:20 PM'
Aug 04, 2024 7:14:17 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Aug 04, 2024 7:14:18 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Aug 04, 2024 7:14:19 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
End of LogType:stderr
***********************************************************************
Container: container_1722798834650_0001_01_000001 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:stdout
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:0
LogContents:
End of LogType:stdout
***********************************************************************
Container: container_1722798834650_0001_01_000001 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:syslog
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:97284
LogContents:
2024-08-04 19:14:12,495 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1722798834650_0001_000001
2024-08-04 19:14:12,717 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: /************************************************************
[system properties]
os.name: Linux
os.version: 6.1.85+
java.home: /usr/lib/jvm/java-11-openjdk-amd64
java.runtime.version: 11.0.24+8-post-Ubuntu-1ubuntu322.04
java.vendor: Ubuntu
java.version: 11.0.24
java.vm.name: OpenJDK 64-Bit Server VM
java.class.path:
/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000001:/content/hadoop-3.4.0/etc/hadoop:/content/hadoop-3.4.0/share/hadoop/common/hadoop-kms-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsp-api-2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/bcprov-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/
share/hadoop/common/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-9.4.53.v20
231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jul-to-slf4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.100.Final.ja
r:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/HikariCP-4.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/n
etty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/l
ib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-router-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-globalpolicygenerator-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-tests-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-plus-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/objenesis-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/l
ib/javax.inject-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jna-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcpkix-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/snakeyaml-2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-servlet-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/codemodel-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-tree-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-api-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcutil-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/fst-2.50.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jsonschema2pojo-core-1.0.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-common-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-jndi-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-commons-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-guice-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-client-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-annotations-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-base-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/mockito-core-2.28.2.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:/content/tar
get/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000001/job.jar java.io.tmpdir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000001/tmp user.dir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000001 user.name: root ************************************************************/ 2024-08-04 19:14:12,821 INFO [main] org.apache.hadoop.security.SecurityUtil: Updating Configuration 2024-08-04 19:14:13,142 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens: [Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1722798834650 } attemptId: 1 } keyId: -675888397)] 2024-08-04 19:14:13,219 INFO [main] org.apache.hadoop.conf.Configuration: resource-types.xml not found 2024-08-04 19:14:13,222 INFO [main] org.apache.hadoop.yarn.util.resource.ResourceUtils: Unable to find 'resource-types.xml'. 2024-08-04 19:14:13,246 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter. 2024-08-04 19:14:13,249 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null 2024-08-04 19:14:13,350 INFO [main] org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory 2024-08-04 19:14:13,352 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 2 2024-08-04 19:14:13,352 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2024-08-04 19:14:14,156 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 2024-08-04 19:14:14,475 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler 2024-08-04 19:14:14,477 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher 2024-08-04 19:14:14,478 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher 2024-08-04 19:14:14,480 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher 2024-08-04 19:14:14,480 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler 2024-08-04 19:14:14,482 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType 
for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher 2024-08-04 19:14:14,482 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter 2024-08-04 19:14:14,484 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter 2024-08-04 19:14:14,532 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://localhost:8902] 2024-08-04 19:14:14,562 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://localhost:8902] 2024-08-04 19:14:14,608 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://localhost:8902] 2024-08-04 19:14:14,620 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Creating intermediate history logDir: [hdfs://localhost:8902/tmp/hadoop-yarn/staging/history/done_intermediate] + based on conf. Should ideally be created by the JobHistoryServer: yarn.app.mapreduce.am.create-intermediate-jh-base-dir 2024-08-04 19:14:14,635 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after creating 493, Expected: 1023 2024-08-04 19:14:14,635 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly setting permissions to : 1023, rwxrwxrwt 2024-08-04 19:14:14,659 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after creating 488, Expected: 504 2024-08-04 19:14:14,659 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Explicitly setting permissions to : 504, rwxrwx--- 2024-08-04 19:14:14,663 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job history data to the timeline server is not enabled 2024-08-04 19:14:14,730 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler 2024-08-04 19:14:15,157 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties 2024-08-04 19:14:15,287 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s). 2024-08-04 19:14:15,287 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started 2024-08-04 19:14:15,303 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1722798834650_0001 to jobTokenSecretManager 2024-08-04 19:14:15,550 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1722798834650_0001 because: not enabled; 2024-08-04 19:14:15,581 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1722798834650_0001 = 590. 
Number of splits = 5 2024-08-04 19:14:15,583 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1722798834650_0001 = 1 2024-08-04 19:14:15,583 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1722798834650_0001Job Transitioned from NEW to INITED 2024-08-04 19:14:15,585 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1722798834650_0001. 2024-08-04 19:14:15,656 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 100, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false, ipcFailOver: false. 2024-08-04 19:14:15,663 INFO [main] org.apache.hadoop.ipc.Server: Listener at 0.0.0.0:34269 2024-08-04 19:14:15,665 INFO [Socket Reader #1 for port 0] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 0 2024-08-04 19:14:15,719 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server 2024-08-04 19:14:15,720 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting 2024-08-04 19:14:15,722 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at localhost/127.0.0.1:34269 2024-08-04 19:14:15,723 INFO [IPC Server listener on 0] org.apache.hadoop.ipc.Server: IPC Server listener on 0: starting 2024-08-04 19:14:15,761 INFO [main] org.eclipse.jetty.util.log: Logging initialized @4949ms to org.eclipse.jetty.util.log.Slf4jLog 2024-08-04 19:14:15,894 WARN [main] org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /root/hadoop-http-auth-signature-secret 2024-08-04 19:14:15,954 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter) 2024-08-04 19:14:16,010 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce 2024-08-04 19:14:16,011 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static 2024-08-04 19:14:16,017 INFO [main] org.apache.hadoop.http.HttpServer2: ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2024-08-04 19:14:16,782 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules 2024-08-04 19:14:16,784 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 38235 2024-08-04 19:14:16,786 INFO [main] org.eclipse.jetty.server.Server: jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.24+8-post-Ubuntu-1ubuntu322.04 2024-08-04 19:14:16,825 INFO [main] org.eclipse.jetty.server.session: DefaultSessionIdManager workerName=node0 2024-08-04 19:14:16,826 INFO [main] org.eclipse.jetty.server.session: No SessionScavenger set, using defaults 2024-08-04 19:14:16,827 INFO [main] org.eclipse.jetty.server.session: node0 Scavenging every 660000ms 2024-08-04 19:14:16,845 INFO [main] org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@48a663e9{static,/static,jar:file:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar!/webapps/static,AVAILABLE} 2024-08-04 19:14:19,462 INFO [main] org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@7a1ddbf1{mapreduce,/,file:///content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000001/tmp/jetty-0_0_0_0-38235-hadoop-yarn-common-3_4_0_jar-_-any-17571332027292415163/webapp/,AVAILABLE}{jar:file:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar!/webapps/mapreduce} 2024-08-04 19:14:19,487 INFO [main] org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@58fbd02e{HTTP/1.1, (http/1.1)}{0.0.0.0:38235} 2024-08-04 19:14:19,489 INFO [main] org.eclipse.jetty.server.Server: Started @8678ms 2024-08-04 19:14:19,502 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app mapreduce started at 38235 2024-08-04 19:14:19,543 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 3000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false, ipcFailOver: false. 
2024-08-04 19:14:19,543 INFO [main] org.apache.hadoop.ipc.Server: Listener at 0.0.0.0:40041 2024-08-04 19:14:19,544 INFO [Socket Reader #1 for port 0] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 0 2024-08-04 19:14:19,579 INFO [IPC Server listener on 0] org.apache.hadoop.ipc.Server: IPC Server listener on 0: starting 2024-08-04 19:14:19,579 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting 2024-08-04 19:14:19,667 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true 2024-08-04 19:14:19,667 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3 2024-08-04 19:14:19,667 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33 2024-08-04 19:14:19,678 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: 0% of the mappers will be scheduled using OPPORTUNISTIC containers 2024-08-04 19:14:19,770 INFO [main] org.apache.hadoop.yarn.client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8030 2024-08-04 19:14:20,077 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: maxContainerCapability: <memory:4096, vCores:4> 2024-08-04 19:14:20,077 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: queue: root.default 2024-08-04 19:14:20,093 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500 2024-08-04 19:14:20,093 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10 2024-08-04 19:14:20,169 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1722798834650_0001Job Transitioned from INITED to SETUP 2024-08-04 19:14:20,174 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP 2024-08-04 19:14:20,205 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1722798834650_0001Job Transitioned from SETUP to RUNNING 2024-08-04 19:14:20,448 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Resource capability of task type MAP is set to <memory:1024, vCores:1> 2024-08-04 19:14:20,523 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000000 Task Transitioned from NEW to SCHEDULED 2024-08-04 19:14:20,530 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000001 Task Transitioned from NEW to SCHEDULED 2024-08-04 19:14:20,536 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000002 Task Transitioned from NEW to SCHEDULED 2024-08-04 19:14:20,536 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000003 Task Transitioned from NEW to SCHEDULED 2024-08-04 19:14:20,536 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000004 Task Transitioned from NEW to SCHEDULED 2024-08-04 19:14:20,551 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Resource capability of task type REDUCE is set to <memory:1024, vCores:1> 2024-08-04 19:14:20,551 INFO 
[AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_r_000000 Task Transitioned from NEW to SCHEDULED 2024-08-04 19:14:20,571 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED 2024-08-04 19:14:20,571 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000001_0 TaskAttempt Transitioned from NEW to UNASSIGNED 2024-08-04 19:14:20,571 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000002_0 TaskAttempt Transitioned from NEW to UNASSIGNED 2024-08-04 19:14:20,571 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000003_0 TaskAttempt Transitioned from NEW to UNASSIGNED 2024-08-04 19:14:20,571 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000004_0 TaskAttempt Transitioned from NEW to UNASSIGNED 2024-08-04 19:14:20,571 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_r_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED 2024-08-04 19:14:20,573 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1722798834650_0001, File: hdfs://localhost:8902/tmp/hadoop-yarn/staging/root/.staging/job_1722798834650_0001/job_1722798834650_0001_1.jhist 2024-08-04 19:14:20,584 INFO [Thread-57] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:1024, vCores:1> 2024-08-04 19:14:20,726 INFO [Thread-57] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: reduceResourceRequest:<memory:1024, vCores:1> 2024-08-04 19:14:21,107 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1 ScheduledMaps:5 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0 2024-08-04 19:14:21,633 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: applicationId=application_1722798834650_0001: ask=3 release=0 newContainers=0 finishedContainers=0 resourceLimit=<memory:2048, vCores:7> knownNMs=1 2024-08-04 19:14:21,652 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:2048, vCores:7> 2024-08-04 19:14:21,653 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. 
completedMapsForReduceSlowstart 1 2024-08-04 19:14:22,677 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 2 2024-08-04 19:14:22,680 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1722798834650_0001_01_000002 to attempt_1722798834650_0001_m_000000_0 2024-08-04 19:14:22,686 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1722798834650_0001_01_000003 to attempt_1722798834650_0001_m_000001_0 2024-08-04 19:14:22,689 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:22,689 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1 2024-08-04 19:14:22,689 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1 ScheduledMaps:3 ScheduledReds:0 AssignedMaps:2 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:2 RackLocal:0 2024-08-04 19:14:22,802 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar file on the remote FS is hdfs://localhost:8902/tmp/hadoop-yarn/staging/root/.staging/job_1722798834650_0001/job.jar 2024-08-04 19:14:22,813 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf file on the remote FS is /tmp/hadoop-yarn/staging/root/.staging/job_1722798834650_0001/job.xml 2024-08-04 19:14:22,816 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0 tokens and #1 secret keys for NM use for launching container 2024-08-04 19:14:22,816 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 1 2024-08-04 19:14:22,816 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData 2024-08-04 19:14:22,897 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. Setting task attempt jvm max heap size to -Xmx820m 2024-08-04 19:14:22,901 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED 2024-08-04 19:14:22,912 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. 
Setting task attempt jvm max heap size to -Xmx820m 2024-08-04 19:14:22,913 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000001_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED 2024-08-04 19:14:22,915 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1722798834650_0001_01_000002 taskAttempt attempt_1722798834650_0001_m_000000_0 2024-08-04 19:14:22,918 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1722798834650_0001_m_000000_0 2024-08-04 19:14:22,922 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1722798834650_0001_01_000003 taskAttempt attempt_1722798834650_0001_m_000001_0 2024-08-04 19:14:22,922 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1722798834650_0001_m_000001_0 2024-08-04 19:14:23,109 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1722798834650_0001_m_000000_0 : 43847 2024-08-04 19:14:23,111 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1722798834650_0001_m_000000_0] using containerId: [container_1722798834650_0001_01_000002 on NM: [localhost:34535] 2024-08-04 19:14:23,116 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING 2024-08-04 19:14:23,118 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000000 Task Transitioned from SCHEDULED to RUNNING 2024-08-04 19:14:23,144 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1722798834650_0001_m_000001_0 : 43847 2024-08-04 19:14:23,149 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1722798834650_0001_m_000001_0] using containerId: [container_1722798834650_0001_01_000003 on NM: [localhost:34535] 2024-08-04 19:14:23,149 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000001_0 TaskAttempt Transitioned from ASSIGNED to RUNNING 2024-08-04 19:14:23,149 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000001 Task Transitioned from SCHEDULED to RUNNING 2024-08-04 19:14:23,697 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: applicationId=application_1722798834650_0001: ask=3 release=0 newContainers=0 finishedContainers=0 resourceLimit=<memory:0, vCores:5> knownNMs=1 2024-08-04 19:14:23,698 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:23,698 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. 
completedMapsForReduceSlowstart 1 2024-08-04 19:14:24,706 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:24,706 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1 2024-08-04 19:14:25,714 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:25,715 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1 2024-08-04 19:14:26,723 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:26,723 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1 2024-08-04 19:14:27,732 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:27,732 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1 2024-08-04 19:14:27,816 INFO [Socket Reader #1 for port 0] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1722798834650_0001 (auth:SIMPLE) from localhost:34572 / 127.0.0.1:34572 2024-08-04 19:14:27,918 INFO [IPC Server handler 7 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1722798834650_0001_m_000003 asked for a task 2024-08-04 19:14:27,918 INFO [IPC Server handler 7 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1722798834650_0001_m_000003 given task: attempt_1722798834650_0001_m_000001_0 2024-08-04 19:14:28,136 INFO [Socket Reader #1 for port 0] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1722798834650_0001 (auth:SIMPLE) from localhost:34586 / 127.0.0.1:34586 2024-08-04 19:14:28,222 INFO [IPC Server handler 6 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1722798834650_0001_m_000002 asked for a task 2024-08-04 19:14:28,222 INFO [IPC Server handler 6 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1722798834650_0001_m_000002 given task: attempt_1722798834650_0001_m_000000_0 2024-08-04 19:14:28,739 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:28,739 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1 2024-08-04 19:14:29,744 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:29,744 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. 
completedMapsForReduceSlowstart 1 2024-08-04 19:14:30,750 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:30,750 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1 2024-08-04 19:14:31,757 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:31,757 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1 2024-08-04 19:14:32,763 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:32,764 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1 2024-08-04 19:14:33,414 INFO [IPC Server handler 4 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_m_000001_0 is : 0.0 2024-08-04 19:14:33,632 INFO [IPC Server handler 22 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_m_000001_0 is : 1.0 2024-08-04 19:14:33,651 INFO [IPC Server handler 7 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgment from attempt_1722798834650_0001_m_000001_0 2024-08-04 19:14:33,659 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000001_0 TaskAttempt Transitioned from RUNNING to SUCCESS_FINISHING_CONTAINER 2024-08-04 19:14:33,736 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1722798834650_0001_m_000001_0 2024-08-04 19:14:33,741 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000001 Task Transitioned from RUNNING to SUCCEEDED 2024-08-04 19:14:33,766 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1 2024-08-04 19:14:33,786 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:33,786 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold reached. Scheduling reduces. 
2024-08-04 19:14:33,786 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.2 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1639, vCores:6> finalReduceResourceLimit:<memory:409, vCores:1> netScheduledMapResource:<memory:5120, vCores:5> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:33,786 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1 ScheduledMaps:3 ScheduledReds:0 AssignedMaps:2 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:2 RackLocal:0 2024-08-04 19:14:33,969 INFO [IPC Server handler 6 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_m_000000_0 is : 0.0 2024-08-04 19:14:34,103 INFO [IPC Server handler 4 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_m_000000_0 is : 1.0 2024-08-04 19:14:34,111 INFO [IPC Server handler 5 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgment from attempt_1722798834650_0001_m_000000_0 2024-08-04 19:14:34,112 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000000_0 TaskAttempt Transitioned from RUNNING to SUCCESS_FINISHING_CONTAINER 2024-08-04 19:14:34,113 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1722798834650_0001_m_000000_0 2024-08-04 19:14:34,113 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000000 Task Transitioned from RUNNING to SUCCEEDED 2024-08-04 19:14:34,113 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 2 2024-08-04 19:14:34,787 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1 ScheduledMaps:3 ScheduledReds:0 AssignedMaps:2 AssignedReds:0 CompletedMaps:2 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:2 RackLocal:0 2024-08-04 19:14:34,814 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1722798834650_0001_01_000003 2024-08-04 19:14:34,815 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1722798834650_0001_01_000002 2024-08-04 19:14:34,815 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 2 2024-08-04 19:14:34,815 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1722798834650_0001_m_000001_0: 2024-08-04 19:14:34,816 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000001_0 TaskAttempt Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED 2024-08-04 19:14:34,816 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1722798834650_0001_m_000000_0: 2024-08-04 19:14:34,816 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000000_0 TaskAttempt 
Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED 2024-08-04 19:14:34,816 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1722798834650_0001_01_000004 to attempt_1722798834650_0001_m_000002_0 2024-08-04 19:14:34,817 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1722798834650_0001_01_000005 to attempt_1722798834650_0001_m_000003_0 2024-08-04 19:14:34,817 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:34,817 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.4 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1229, vCores:5> finalReduceResourceLimit:<memory:819, vCores:2> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:34,817 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:2 AssignedReds:0 CompletedMaps:2 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:4 RackLocal:0 2024-08-04 19:14:34,817 INFO [ContainerLauncher #3] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_COMPLETED for container container_1722798834650_0001_01_000002 taskAttempt attempt_1722798834650_0001_m_000000_0 2024-08-04 19:14:34,818 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_COMPLETED for container container_1722798834650_0001_01_000003 taskAttempt attempt_1722798834650_0001_m_000001_0 2024-08-04 19:14:34,821 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. Setting task attempt jvm max heap size to -Xmx820m 2024-08-04 19:14:34,822 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000002_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED 2024-08-04 19:14:34,825 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. 
Setting task attempt jvm max heap size to -Xmx820m 2024-08-04 19:14:34,825 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000003_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED 2024-08-04 19:14:34,829 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1722798834650_0001_01_000004 taskAttempt attempt_1722798834650_0001_m_000002_0 2024-08-04 19:14:34,829 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1722798834650_0001_m_000002_0 2024-08-04 19:14:34,835 INFO [ContainerLauncher #5] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1722798834650_0001_01_000005 taskAttempt attempt_1722798834650_0001_m_000003_0 2024-08-04 19:14:34,835 INFO [ContainerLauncher #5] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1722798834650_0001_m_000003_0 2024-08-04 19:14:34,903 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1722798834650_0001_m_000002_0 : 43847 2024-08-04 19:14:34,904 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1722798834650_0001_m_000002_0] using containerId: [container_1722798834650_0001_01_000004 on NM: [localhost:34535] 2024-08-04 19:14:34,904 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000002_0 TaskAttempt Transitioned from ASSIGNED to RUNNING 2024-08-04 19:14:34,904 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000002 Task Transitioned from SCHEDULED to RUNNING 2024-08-04 19:14:34,929 INFO [ContainerLauncher #5] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1722798834650_0001_m_000003_0 : 43847 2024-08-04 19:14:34,930 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1722798834650_0001_m_000003_0] using containerId: [container_1722798834650_0001_01_000005 on NM: [localhost:34535] 2024-08-04 19:14:34,930 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000003_0 TaskAttempt Transitioned from ASSIGNED to RUNNING 2024-08-04 19:14:34,931 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000003 Task Transitioned from SCHEDULED to RUNNING 2024-08-04 19:14:35,827 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: applicationId=application_1722798834650_0001: ask=3 release=0 newContainers=0 finishedContainers=0 resourceLimit=<memory:0, vCores:5> knownNMs=1 2024-08-04 19:14:35,827 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:35,827 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.4 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1229, 
vCores:5> finalReduceResourceLimit:<memory:819, vCores:2> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:36,854 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:36,854 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.4 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1229, vCores:5> finalReduceResourceLimit:<memory:819, vCores:2> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:37,863 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:37,863 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.4 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1229, vCores:5> finalReduceResourceLimit:<memory:819, vCores:2> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:38,869 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:38,869 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.4 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1229, vCores:5> finalReduceResourceLimit:<memory:819, vCores:2> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:39,876 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:39,876 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.4 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1229, vCores:5> finalReduceResourceLimit:<memory:819, vCores:2> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:40,881 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:40,881 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.4 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1229, vCores:5> finalReduceResourceLimit:<memory:819, vCores:2> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:41,051 INFO [Socket Reader #1 for port 0] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1722798834650_0001 (auth:SIMPLE) from localhost:34438 / 127.0.0.1:34438 2024-08-04 19:14:41,087 INFO [IPC Server handler 4 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1722798834650_0001_m_000005 asked for a task 2024-08-04 19:14:41,087 INFO [IPC Server handler 4 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1722798834650_0001_m_000005 given task: attempt_1722798834650_0001_m_000003_0 2024-08-04 19:14:41,270 INFO [Socket Reader #1 for port 0] 
SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1722798834650_0001 (auth:SIMPLE) from localhost:34442 / 127.0.0.1:34442 2024-08-04 19:14:41,315 INFO [IPC Server handler 2 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1722798834650_0001_m_000004 asked for a task 2024-08-04 19:14:41,315 INFO [IPC Server handler 2 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1722798834650_0001_m_000004 given task: attempt_1722798834650_0001_m_000002_0 2024-08-04 19:14:41,895 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:41,895 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.4 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1229, vCores:5> finalReduceResourceLimit:<memory:819, vCores:2> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:42,901 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:42,901 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.4 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1229, vCores:5> finalReduceResourceLimit:<memory:819, vCores:2> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:43,910 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:43,910 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.4 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1229, vCores:5> finalReduceResourceLimit:<memory:819, vCores:2> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:44,917 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:44,917 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.4 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1229, vCores:5> finalReduceResourceLimit:<memory:819, vCores:2> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:45,337 INFO [IPC Server handler 0 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_m_000003_0 is : 0.0 2024-08-04 19:14:45,475 INFO [IPC Server handler 3 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_m_000003_0 is : 1.0 2024-08-04 19:14:45,485 INFO [IPC Server handler 11 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgment from attempt_1722798834650_0001_m_000003_0 2024-08-04 19:14:45,489 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000003_0 TaskAttempt Transitioned from RUNNING to SUCCESS_FINISHING_CONTAINER 2024-08-04 19:14:45,489 INFO 
[AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1722798834650_0001_m_000003_0 2024-08-04 19:14:45,489 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000003 Task Transitioned from RUNNING to SUCCEEDED 2024-08-04 19:14:45,489 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 3 2024-08-04 19:14:45,525 INFO [IPC Server handler 1 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_m_000002_0 is : 0.0 2024-08-04 19:14:45,634 INFO [IPC Server handler 14 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_m_000002_0 is : 1.0 2024-08-04 19:14:45,639 INFO [IPC Server handler 21 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgment from attempt_1722798834650_0001_m_000002_0 2024-08-04 19:14:45,641 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000002_0 TaskAttempt Transitioned from RUNNING to SUCCESS_FINISHING_CONTAINER 2024-08-04 19:14:45,647 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1722798834650_0001_m_000002_0 2024-08-04 19:14:45,647 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000002 Task Transitioned from RUNNING to SUCCEEDED 2024-08-04 19:14:45,647 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 4 2024-08-04 19:14:45,918 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:2 AssignedReds:0 CompletedMaps:4 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:4 RackLocal:0 2024-08-04 19:14:45,923 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:0, vCores:5> 2024-08-04 19:14:45,923 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: completedMapPercent 0.8 totalResourceLimit:<memory:2048, vCores:7> finalMapResourceLimit:<memory:1024, vCores:4> finalReduceResourceLimit:<memory:1024, vCores:3> netScheduledMapResource:<memory:3072, vCores:3> netScheduledReduceResource:<memory:0, vCores:0> 2024-08-04 19:14:45,923 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Ramping up 1 2024-08-04 19:14:45,923 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:1 AssignedMaps:2 AssignedReds:0 CompletedMaps:4 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:4 RackLocal:0 2024-08-04 19:14:46,933 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: applicationId=application_1722798834650_0001: ask=1 release=0 newContainers=1 finishedContainers=2 resourceLimit=<memory:1024, vCores:6> knownNMs=1 2024-08-04 19:14:46,933 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1722798834650_0001_01_000005 2024-08-04 19:14:46,933 INFO [RMCommunicator Allocator] 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1722798834650_0001_01_000004 2024-08-04 19:14:46,933 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1 2024-08-04 19:14:46,933 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1722798834650_0001_m_000003_0: 2024-08-04 19:14:46,934 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000003_0 TaskAttempt Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED 2024-08-04 19:14:46,934 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1722798834650_0001_m_000002_0: 2024-08-04 19:14:46,934 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1722798834650_0001_01_000006 to attempt_1722798834650_0001_m_000004_0 2024-08-04 19:14:46,934 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000002_0 TaskAttempt Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED 2024-08-04 19:14:46,934 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:1 AssignedReds:0 CompletedMaps:4 CompletedReds:0 ContAlloc:5 ContRel:0 HostLocal:5 RackLocal:0 2024-08-04 19:14:46,937 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. Setting task attempt jvm max heap size to -Xmx820m 2024-08-04 19:14:46,938 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000004_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED 2024-08-04 19:14:46,939 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_COMPLETED for container container_1722798834650_0001_01_000005 taskAttempt attempt_1722798834650_0001_m_000003_0 2024-08-04 19:14:46,942 INFO [ContainerLauncher #7] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_COMPLETED for container container_1722798834650_0001_01_000004 taskAttempt attempt_1722798834650_0001_m_000002_0 2024-08-04 19:14:46,946 INFO [ContainerLauncher #8] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1722798834650_0001_01_000006 taskAttempt attempt_1722798834650_0001_m_000004_0 2024-08-04 19:14:46,946 INFO [ContainerLauncher #8] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1722798834650_0001_m_000004_0 2024-08-04 19:14:46,985 INFO [ContainerLauncher #8] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1722798834650_0001_m_000004_0 : 43847 2024-08-04 19:14:46,985 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1722798834650_0001_m_000004_0] using containerId: [container_1722798834650_0001_01_000006 on NM: [localhost:34535] 2024-08-04 19:14:46,985 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000004_0 TaskAttempt Transitioned from ASSIGNED to RUNNING 2024-08-04 19:14:46,986 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000004 Task Transitioned from SCHEDULED to RUNNING 2024-08-04 19:14:47,949 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: applicationId=application_1722798834650_0001: ask=3 release=0 newContainers=1 finishedContainers=0 resourceLimit=<memory:0, vCores:5> knownNMs=1 2024-08-04 19:14:47,949 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1 2024-08-04 19:14:47,949 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned to reduce 2024-08-04 19:14:47,949 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1722798834650_0001_01_000007 to attempt_1722798834650_0001_r_000000_0 2024-08-04 19:14:47,949 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:1 CompletedMaps:4 CompletedReds:0 ContAlloc:6 ContRel:0 HostLocal:5 RackLocal:0 2024-08-04 19:14:47,989 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. Setting task attempt jvm max heap size to -Xmx820m 2024-08-04 19:14:48,000 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_r_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED 2024-08-04 19:14:48,008 INFO [ContainerLauncher #9] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1722798834650_0001_01_000007 taskAttempt attempt_1722798834650_0001_r_000000_0 2024-08-04 19:14:48,008 INFO [ContainerLauncher #9] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1722798834650_0001_r_000000_0 2024-08-04 19:14:48,112 INFO [ContainerLauncher #9] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1722798834650_0001_r_000000_0 : 43847 2024-08-04 19:14:48,112 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1722798834650_0001_r_000000_0] using containerId: [container_1722798834650_0001_01_000007 on NM: [localhost:34535] 2024-08-04 19:14:48,112 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_r_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING 2024-08-04 19:14:48,113 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_r_000000 Task Transitioned from SCHEDULED to RUNNING 2024-08-04 19:14:48,954 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: applicationId=application_1722798834650_0001: ask=1 release=0 newContainers=0 finishedContainers=0 resourceLimit=<memory:0, vCores:5> knownNMs=1 2024-08-04 19:14:53,473 INFO [Socket Reader #1 for port 0] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1722798834650_0001 (auth:SIMPLE) from localhost:58978 / 
127.0.0.1:58978 2024-08-04 19:14:53,530 INFO [IPC Server handler 1 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1722798834650_0001_m_000006 asked for a task 2024-08-04 19:14:53,530 INFO [IPC Server handler 1 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1722798834650_0001_m_000006 given task: attempt_1722798834650_0001_m_000004_0 2024-08-04 19:14:54,873 INFO [Socket Reader #1 for port 0] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1722798834650_0001 (auth:SIMPLE) from localhost:58994 / 127.0.0.1:58994 2024-08-04 19:14:54,935 INFO [IPC Server handler 6 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1722798834650_0001_r_000007 asked for a task 2024-08-04 19:14:54,935 INFO [IPC Server handler 6 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1722798834650_0001_r_000007 given task: attempt_1722798834650_0001_r_000000_0 2024-08-04 19:14:57,673 INFO [IPC Server handler 6 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_m_000004_0 is : 0.0 2024-08-04 19:14:57,820 INFO [IPC Server handler 4 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_m_000004_0 is : 1.0 2024-08-04 19:14:57,841 INFO [IPC Server handler 5 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgment from attempt_1722798834650_0001_m_000004_0 2024-08-04 19:14:57,843 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000004_0 TaskAttempt Transitioned from RUNNING to SUCCESS_FINISHING_CONTAINER 2024-08-04 19:14:57,843 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1722798834650_0001_m_000004_0 2024-08-04 19:14:57,845 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_m_000004 Task Transitioned from RUNNING to SUCCEEDED 2024-08-04 19:14:57,845 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 5 2024-08-04 19:14:58,031 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:1 CompletedMaps:5 CompletedReds:0 ContAlloc:6 ContRel:0 HostLocal:5 RackLocal:0 2024-08-04 19:14:58,052 INFO [IPC Server handler 2 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: MapCompletionEvents request from attempt_1722798834650_0001_r_000000_0. 
startIndex 0 maxEvents 10000 2024-08-04 19:14:58,388 INFO [IPC Server handler 3 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_r_000000_0 is : 0.0 2024-08-04 19:14:59,042 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1722798834650_0001_01_000006 2024-08-04 19:14:59,042 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:1 CompletedMaps:5 CompletedReds:0 ContAlloc:6 ContRel:0 HostLocal:5 RackLocal:0 2024-08-04 19:14:59,043 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1722798834650_0001_m_000004_0: 2024-08-04 19:14:59,043 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_m_000004_0 TaskAttempt Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED 2024-08-04 19:14:59,043 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_COMPLETED for container container_1722798834650_0001_01_000006 taskAttempt attempt_1722798834650_0001_m_000004_0 2024-08-04 19:14:59,151 INFO [IPC Server handler 0 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit-pending state update from attempt_1722798834650_0001_r_000000_0 2024-08-04 19:14:59,154 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_r_000000_0 TaskAttempt Transitioned from RUNNING to COMMIT_PENDING 2024-08-04 19:14:59,155 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: attempt_1722798834650_0001_r_000000_0 given a go for committing the task output. 
2024-08-04 19:14:59,159 INFO [IPC Server handler 3 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit go/no-go request from attempt_1722798834650_0001_r_000000_0 2024-08-04 19:14:59,159 INFO [IPC Server handler 3 on default port 40041] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Result of canCommit for attempt_1722798834650_0001_r_000000_0:true 2024-08-04 19:14:59,210 INFO [IPC Server handler 11 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1722798834650_0001_r_000000_0 is : 1.0 2024-08-04 19:14:59,215 INFO [IPC Server handler 1 on default port 40041] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgment from attempt_1722798834650_0001_r_000000_0 2024-08-04 19:14:59,218 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_r_000000_0 TaskAttempt Transitioned from COMMIT_PENDING to SUCCESS_FINISHING_CONTAINER 2024-08-04 19:14:59,221 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1722798834650_0001_r_000000_0 2024-08-04 19:14:59,222 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1722798834650_0001_r_000000 Task Transitioned from RUNNING to SUCCEEDED 2024-08-04 19:14:59,222 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 6 2024-08-04 19:14:59,223 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1722798834650_0001Job Transitioned from RUNNING to COMMITTING 2024-08-04 19:14:59,249 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_COMMIT 2024-08-04 19:14:59,341 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Calling handler for JobFinishedEvent 2024-08-04 19:14:59,342 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1722798834650_0001Job Transitioned from COMMITTING to SUCCEEDED 2024-08-04 19:14:59,356 INFO [Thread-78] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Job finished cleanly, recording last MRAppMaster retry 2024-08-04 19:14:59,356 INFO [Thread-78] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true 2024-08-04 19:14:59,356 INFO [Thread-78] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: RMCommunicator notified that shouldUnregistered is: true 2024-08-04 19:14:59,356 INFO [Thread-78] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true 2024-08-04 19:14:59,356 INFO [Thread-78] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true 2024-08-04 19:14:59,356 INFO [Thread-78] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services 2024-08-04 19:14:59,362 INFO [Thread-78] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. 
Size of the outstanding queue size is 0 2024-08-04 19:14:59,410 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://localhost:8902/tmp/hadoop-yarn/staging/root/.staging/job_1722798834650_0001/job_1722798834650_0001_1.jhist to hdfs://localhost:8902/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1722798834650_0001-1722798847160-root-QuasiMonteCarlo-1722798899334-5-1-SUCCEEDED-root.default-1722798860122.jhist_tmp 2024-08-04 19:14:59,453 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied from: hdfs://localhost:8902/tmp/hadoop-yarn/staging/root/.staging/job_1722798834650_0001/job_1722798834650_0001_1.jhist to done location: hdfs://localhost:8902/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1722798834650_0001-1722798847160-root-QuasiMonteCarlo-1722798899334-5-1-SUCCEEDED-root.default-1722798860122.jhist_tmp 2024-08-04 19:14:59,454 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Set historyUrl to http://446a375b7fe4:19888/jobhistory/job/job_1722798834650_0001 2024-08-04 19:14:59,456 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://localhost:8902/tmp/hadoop-yarn/staging/root/.staging/job_1722798834650_0001/job_1722798834650_0001_1_conf.xml to hdfs://localhost:8902/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1722798834650_0001_conf.xml_tmp 2024-08-04 19:14:59,508 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied from: hdfs://localhost:8902/tmp/hadoop-yarn/staging/root/.staging/job_1722798834650_0001/job_1722798834650_0001_1_conf.xml to done location: hdfs://localhost:8902/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1722798834650_0001_conf.xml_tmp 2024-08-04 19:14:59,521 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://localhost:8902/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1722798834650_0001.summary_tmp to hdfs://localhost:8902/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1722798834650_0001.summary 2024-08-04 19:14:59,530 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://localhost:8902/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1722798834650_0001_conf.xml_tmp to hdfs://localhost:8902/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1722798834650_0001_conf.xml 2024-08-04 19:14:59,536 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://localhost:8902/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1722798834650_0001-1722798847160-root-QuasiMonteCarlo-1722798899334-5-1-SUCCEEDED-root.default-1722798860122.jhist_tmp to hdfs://localhost:8902/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1722798834650_0001-1722798847160-root-QuasiMonteCarlo-1722798899334-5-1-SUCCEEDED-root.default-1722798860122.jhist 2024-08-04 19:14:59,540 INFO [Thread-78] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. 
super.stop() 2024-08-04 19:14:59,541 INFO [Thread-78] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1722798834650_0001_r_000000_0 2024-08-04 19:14:59,582 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1722798834650_0001_r_000000_0 TaskAttempt Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED 2024-08-04 19:14:59,589 INFO [Thread-78] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Setting job diagnostics to 2024-08-04 19:14:59,589 INFO [Thread-78] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: History url is http://446a375b7fe4:19888/jobhistory/job/job_1722798834650_0001 2024-08-04 19:14:59,638 INFO [Thread-78] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Waiting for application to be successfully unregistered. 2024-08-04 19:15:00,641 INFO [Thread-78] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:1 CompletedMaps:5 CompletedReds:0 ContAlloc:6 ContRel:0 HostLocal:5 RackLocal:0 2024-08-04 19:15:00,643 INFO [Thread-78] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://localhost:8902/ /tmp/hadoop-yarn/staging/root/.staging/job_1722798834650_0001 2024-08-04 19:15:00,646 INFO [Thread-78] org.apache.hadoop.ipc.Server: Stopping server on 40041 2024-08-04 19:15:00,647 INFO [IPC Server listener on 0] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 0 2024-08-04 19:15:00,649 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder 2024-08-04 19:15:00,654 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted 2024-08-04 19:15:00,655 INFO [Ping Checker for TaskAttemptFinishingMonitor] org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: TaskAttemptFinishingMonitor thread interrupted 2024-08-04 19:15:05,657 INFO [Thread-78] org.apache.hadoop.ipc.Server: Stopping server on 34269 2024-08-04 19:15:05,658 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder 2024-08-04 19:15:05,658 INFO [IPC Server listener on 0] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 0 2024-08-04 19:15:05,674 INFO [Thread-78] org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.w.WebAppContext@7a1ddbf1{mapreduce,/,null,STOPPED}{jar:file:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar!/webapps/mapreduce} 2024-08-04 19:15:05,691 INFO [Thread-78] org.eclipse.jetty.server.AbstractConnector: Stopped ServerConnector@58fbd02e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2024-08-04 19:15:05,691 INFO [Thread-78] org.eclipse.jetty.server.session: node0 Stopped scavenging 2024-08-04 19:15:05,692 INFO [Thread-78] org.eclipse.jetty.server.handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@48a663e9{static,/static,jar:file:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar!/webapps/static,STOPPED} End of LogType:syslog *********************************************************************** Container: container_1722798834650_0001_01_000006 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:directory.info LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:2101 LogContents: ls -l: total 32 -rw-r--r-- 1 root root 129 Aug 4 19:14 container_tokens -rwx------ 1 root root 964 Aug 4 19:14 
default_container_executor_session.sh -rwx------ 1 root root 1019 Aug 4 19:14 default_container_executor.sh lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.jar -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.xml -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml -rwx------ 1 root root 8159 Aug 4 19:14 launch_container.sh drwx--x--- 2 root root 4096 Aug 4 19:14 tmp find -L . -maxdepth 5 -ls: 809806 4 drwx--x--- 3 root root 4096 Aug 4 19:14 . 809828 4 -rwx------ 1 root root 1019 Aug 4 19:14 ./default_container_executor.sh 809751 260 -r-x------ 1 root root 264361 Aug 4 19:14 ./job.xml 809822 4 -rw-r--r-- 1 root root 129 Aug 4 19:14 ./container_tokens 809826 4 -rwx------ 1 root root 964 Aug 4 19:14 ./default_container_executor_session.sh 809824 8 -rwx------ 1 root root 8159 Aug 4 19:14 ./launch_container.sh 809829 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor.sh.crc 809825 4 -rw-r--r-- 1 root root 72 Aug 4 19:14 ./.launch_container.sh.crc 809745 4 drwx------ 2 root root 4096 Aug 4 19:14 ./job.jar 809746 276 -r-x------ 1 root root 281609 Aug 4 19:14 ./job.jar/job.jar 809823 4 -rw-r--r-- 1 root root 12 Aug 4 19:14 ./.container_tokens.crc 809827 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor_session.sh.crc 809821 4 drwx--x--- 2 root root 4096 Aug 4 19:14 ./tmp broken symlinks(find -L . -maxdepth 5 -type l -ls): End of LogType:directory.info ******************************************************************************* Container: container_1722798834650_0001_01_000006 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:launch_container.sh LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:8159 LogContents: #!/bin/bash set -o pipefail -e export PRELAUNCH_OUT="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/prelaunch.out" exec >"${PRELAUNCH_OUT}" export PRELAUNCH_ERR="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/prelaunch.err" exec 2>"${PRELAUNCH_ERR}" echo "Setting up env variables" export JAVA_HOME=${JAVA_HOME:-"/usr/lib/jvm/java-11-openjdk-amd64"} export HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-"/content/hadoop-3.4.0"} export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/content/hadoop-3.4.0/etc/hadoop"} export HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HOME=${HADOOP_HOME:-"/content/hadoop-3.4.0"} export PATH=${PATH:-"/content/hadoop-3.4.0/bin:/opt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin"} export LANG=${LANG:-"en_US.UTF-8"} export HADOOP_TOKEN_FILE_LOCATION="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000006/container_tokens" export 
CONTAINER_ID="container_1722798834650_0001_01_000006" export NM_PORT="34535" export NM_HOST="localhost" export NM_HTTP_PORT="37361" export LOCAL_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001" export LOCAL_USER_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/" export LOG_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000006,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000006,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000006,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006" export USER="root" export LOGNAME="root" export HOME="/home/" export PWD="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000006" export LOCALIZATION_COUNTERS="0,548046,0,2,1" export JVM_PID="$$" export NM_AUX_SERVICE_mapreduce_shuffle="AACrRwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=" export STDOUT_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/stdout" export SHELL="/bin/bash" export HADOOP_ROOT_LOGGER="INFO,console" export CLASSPATH="$PWD:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/*:job.jar/classes/:job.jar/lib/*:$PWD/*" export LD_LIBRARY_PATH="$PWD:$HADOOP_COMMON_HOME/lib/native" export STDERR_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/stderr" export HADOOP_CLIENT_OPTS="" export MALLOC_ARENA_MAX="4" echo "Setting up job resources" ln -sf -- 
"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml" "job.xml" ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar" "job.jar" echo "Copying debugging information" # Creating copy of launch script cp "launch_container.sh" "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/launch_container.sh" chmod 640 "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/launch_container.sh" # Determining directory contents echo "ls -l:" 1>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/directory.info" ls -l 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/directory.info" echo "find -L . -maxdepth 5 -ls:" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/directory.info" find -L . -maxdepth 5 -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/directory.info" echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/directory.info" find -L . 
-maxdepth 5 -type l -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/directory.info"
echo "Launching container"
exec /bin/bash -c "$JAVA_HOME/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=$PWD/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 127.0.0.1 40041 attempt_1722798834650_0001_m_000004_0 6 1>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/stdout 2>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000006/stderr "
End of LogType:launch_container.sh
************************************************************************************
Container: container_1722798834650_0001_01_000006 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:prelaunch.err
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:0
LogContents:
End of LogType:prelaunch.err
******************************************************************************
Container: container_1722798834650_0001_01_000006 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:prelaunch.out
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:100
LogContents:
Setting up env variables
Setting up job resources
Copying debugging information
Launching container
End of LogType:prelaunch.out
******************************************************************************
Container: container_1722798834650_0001_01_000006 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:stderr
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:0
LogContents:
End of LogType:stderr
***********************************************************************
Container: container_1722798834650_0001_01_000006 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:stdout
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:0
LogContents:
End of LogType:stdout
***********************************************************************
Container: container_1722798834650_0001_01_000006 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:syslog
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:31510
LogContents:
2024-08-04 19:14:51,129 INFO [main] org.apache.hadoop.security.SecurityUtil: Updating Configuration 2024-08-04 19:14:51,786 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties 2024-08-04 19:14:52,527 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s). 
2024-08-04 19:14:52,527 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started 2024-08-04 19:14:52,926 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens: [Kind: mapreduce.job, Service: job_1722798834650_0001, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@70f02c32)] 2024-08-04 19:14:53,089 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now. 2024-08-04 19:14:53,846 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001 2024-08-04 19:14:55,154 INFO [main] org.apache.hadoop.mapred.YarnChild: /************************************************************ [system properties] os.name: Linux os.version: 6.1.85+ java.home: /usr/lib/jvm/java-11-openjdk-amd64 java.runtime.version: 11.0.24+8-post-Ubuntu-1ubuntu322.04 java.vendor: Ubuntu java.version: 11.0.24 java.vm.name: OpenJDK 64-Bit Server VM java.class.path: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000006:/content/hadoop-3.4.0/etc/hadoop:/content/hadoop-3.4.0/share/hadoop/common/hadoop-kms-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsp-api-2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/bcprov-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/h
adoop-3.4.0/share/hadoop/common/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/comm
on/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jul-to-slf4j-1.7.36.jar:/content/hadoop-3.4.0/share/hado
op/common/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jet
ty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/HikariCP-4.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoo
p/hdfs/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.4.0.jar:/content/hadoop-3.4
.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-router-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-globalpolicygenerator-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-tests-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-plus-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/objenesis-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.inject-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jna-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcpkix-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/snakeyaml-2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-servlet-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/codemodel-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-tree-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-api-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcutil-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/fst-2.50.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jsonschema2pojo-core-1.0.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-common-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-jndi-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-commons-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-guice-1.19.4
.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-client-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-annotations-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-base-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/mockito-core-2.28.2.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000006/job.jar java.io.tmpdir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000006/tmp user.dir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000006 user.name: root ************************************************************/ 2024-08-04 19:14:55,155 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. 
Instead, use dfs.metrics.session-id 2024-08-04 19:14:56,696 INFO [main] org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory 2024-08-04 19:14:56,703 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 2 2024-08-04 19:14:56,703 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2024-08-04 19:14:56,760 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ] 2024-08-04 19:14:57,364 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://localhost:8902/user/root/QuasiMonteCarlo_1722798838891_1650860168/in/part4:0+118 2024-08-04 19:14:57,582 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584) 2024-08-04 19:14:57,583 INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 100 2024-08-04 19:14:57,583 INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 83886080 2024-08-04 19:14:57,583 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 104857600 2024-08-04 19:14:57,583 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396; length = 6553600 2024-08-04 19:14:57,591 INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 2024-08-04 19:14:57,683 INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output 2024-08-04 19:14:57,683 INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output 2024-08-04 19:14:57,683 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 18; bufvoid = 104857600 2024-08-04 19:14:57,683 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600 2024-08-04 19:14:57,692 INFO [main] org.apache.hadoop.mapred.MapTask: Finished spill 0 2024-08-04 19:14:57,787 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1722798834650_0001_m_000004_0 is done. And is in the process of committing 2024-08-04 19:14:57,844 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1722798834650_0001_m_000004_0' done. 2024-08-04 19:14:57,878 INFO [main] org.apache.hadoop.mapred.Task: Final Counters for attempt_1722798834650_0001_m_000004_0: Counters: 28 File System Counters FILE: Number of bytes read=0 FILE: Number of bytes written=309460 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=264 HDFS: Number of bytes written=0 HDFS: Number of read operations=4 HDFS: Number of large read operations=0 HDFS: Number of write operations=0 HDFS: Number of bytes read erasure-coded=0 Map-Reduce Framework Map input records=1 Map output records=2 Map output bytes=18 Map output materialized bytes=28 Input split bytes=146 Combine input records=0 Spilled Records=2 Failed Shuffles=0 Merged Map outputs=0 GC time elapsed (ms)=91 CPU time spent (ms)=840 Physical memory (bytes) snapshot=361263104 Virtual memory (bytes) snapshot=2716454912 Total committed heap usage (bytes)=273678336 Peak Map Physical memory (bytes)=361263104 Peak Map Virtual memory (bytes)=2716454912 File Input Format Counters Bytes Read=118 2024-08-04 19:14:57,880 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics system... 
2024-08-04 19:14:57,883 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system stopped. 2024-08-04 19:14:57,883 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete. End of LogType:syslog *********************************************************************** Container: container_1722798834650_0001_01_000007 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:directory.info LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:2101 LogContents: ls -l: total 36 -rw-r--r-- 1 root root 129 Aug 4 19:14 container_tokens -rwx------ 1 root root 964 Aug 4 19:14 default_container_executor_session.sh -rwx------ 1 root root 1019 Aug 4 19:14 default_container_executor.sh lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.jar -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar lrwxrwxrwx 1 root root 179 Aug 4 19:14 job.xml -> /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml -rwx------ 1 root root 8351 Aug 4 19:14 launch_container.sh drwx--x--- 2 root root 4096 Aug 4 19:14 tmp find -L . -maxdepth 5 -ls: 809887 4 drwx--x--- 3 root root 4096 Aug 4 19:14 . 809917 4 -rwx------ 1 root root 1019 Aug 4 19:14 ./default_container_executor.sh 809751 260 -r-x------ 1 root root 264361 Aug 4 19:14 ./job.xml 809911 4 -rw-r--r-- 1 root root 129 Aug 4 19:14 ./container_tokens 809915 4 -rwx------ 1 root root 964 Aug 4 19:14 ./default_container_executor_session.sh 809913 12 -rwx------ 1 root root 8351 Aug 4 19:14 ./launch_container.sh 809918 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor.sh.crc 809914 4 -rw-r--r-- 1 root root 76 Aug 4 19:14 ./.launch_container.sh.crc 809745 4 drwx------ 2 root root 4096 Aug 4 19:14 ./job.jar 809746 276 -r-x------ 1 root root 281609 Aug 4 19:14 ./job.jar/job.jar 809912 4 -rw-r--r-- 1 root root 12 Aug 4 19:14 ./.container_tokens.crc 809916 4 -rw-r--r-- 1 root root 16 Aug 4 19:14 ./.default_container_executor_session.sh.crc 809910 4 drwx--x--- 2 root root 4096 Aug 4 19:14 ./tmp broken symlinks(find -L . 
-maxdepth 5 -type l -ls): End of LogType:directory.info ******************************************************************************* Container: container_1722798834650_0001_01_000007 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:launch_container.sh LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:8351 LogContents: #!/bin/bash set -o pipefail -e export PRELAUNCH_OUT="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/prelaunch.out" exec >"${PRELAUNCH_OUT}" export PRELAUNCH_ERR="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/prelaunch.err" exec 2>"${PRELAUNCH_ERR}" echo "Setting up env variables" export JAVA_HOME=${JAVA_HOME:-"/usr/lib/jvm/java-11-openjdk-amd64"} export HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-"/content/hadoop-3.4.0"} export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/content/hadoop-3.4.0/etc/hadoop"} export HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-"/content/hadoop-3.4.0"} export HADOOP_HOME=${HADOOP_HOME:-"/content/hadoop-3.4.0"} export PATH=${PATH:-"/content/hadoop-3.4.0/bin:/opt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin"} export LANG=${LANG:-"en_US.UTF-8"} export HADOOP_TOKEN_FILE_LOCATION="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000007/container_tokens" export CONTAINER_ID="container_1722798834650_0001_01_000007" export NM_PORT="34535" export NM_HOST="localhost" export NM_HTTP_PORT="37361" export LOCAL_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001" export LOCAL_USER_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/" export 
LOG_DIRS="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_0/application_1722798834650_0001/container_1722798834650_0001_01_000007,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_2/application_1722798834650_0001/container_1722798834650_0001_01_000007,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_3/application_1722798834650_0001/container_1722798834650_0001_01_000007" export USER="root" export LOGNAME="root" export HOME="/home/" export PWD="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000007" export LOCALIZATION_COUNTERS="0,548046,0,2,14" export JVM_PID="$$" export NM_AUX_SERVICE_mapreduce_shuffle="AACrRwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=" export STDOUT_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/stdout" export SHELL="/bin/bash" export HADOOP_ROOT_LOGGER="INFO,console" export CLASSPATH="$PWD:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/*:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/*:job.jar/classes/:job.jar/lib/*:$PWD/*" export LD_LIBRARY_PATH="$PWD:$HADOOP_COMMON_HOME/lib/native" export STDERR_LOGFILE_ENV="/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/stderr" export HADOOP_CLIENT_OPTS="" export MALLOC_ARENA_MAX="4" echo "Setting up job resources" ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/13/job.xml" "job.xml" ln -sf -- "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001/filecache/11/job.jar" "job.jar" echo "Copying debugging information" # Creating copy of launch script cp "launch_container.sh" "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/launch_container.sh" chmod 640 "/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/launch_container.sh" # Determining directory contents echo "ls -l:" 1>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/directory.info" ls -l 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/directory.info" echo "find -L . 
-maxdepth 5 -ls:" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/directory.info" find -L . -maxdepth 5 -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/directory.info" echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/directory.info" find -L . -maxdepth 5 -type l -ls 1>>"/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/directory.info" echo "Launching container" exec /bin/bash -c "$JAVA_HOME/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=$PWD/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dyarn.app.mapreduce.shuffle.logger=INFO,shuffleCLA -Dyarn.app.mapreduce.shuffle.logfile=syslog.shuffle -Dyarn.app.mapreduce.shuffle.log.filesize=0 -Dyarn.app.mapreduce.shuffle.log.backups=0 org.apache.hadoop.mapred.YarnChild 127.0.0.1 40041 attempt_1722798834650_0001_r_000000_0 7 1>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/stdout 2>/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-logDir-nm-0_1/application_1722798834650_0001/container_1722798834650_0001_01_000007/stderr " End of LogType:launch_container.sh ************************************************************************************ Container: container_1722798834650_0001_01_000007 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:prelaunch.err LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:0 LogContents: End of LogType:prelaunch.err ****************************************************************************** Container: container_1722798834650_0001_01_000007 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:prelaunch.out LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:100 LogContents: Setting up env variables Setting up job resources Copying debugging information Launching container End of LogType:prelaunch.out ****************************************************************************** Container: container_1722798834650_0001_01_000007 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:stderr LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:0 LogContents: End of LogType:stderr *********************************************************************** Container: container_1722798834650_0001_01_000007 on localhost_34535 LogAggregationType: AGGREGATED 
==================================================================== LogType:stdout LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:0 LogContents: End of LogType:stdout *********************************************************************** Container: container_1722798834650_0001_01_000007 on localhost_34535 LogAggregationType: AGGREGATED ==================================================================== LogType:syslog LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024 LogLength:31103 LogContents: 2024-08-04 19:14:53,552 INFO [main] org.apache.hadoop.security.SecurityUtil: Updating Configuration 2024-08-04 19:14:53,885 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties 2024-08-04 19:14:54,218 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s). 2024-08-04 19:14:54,218 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system started 2024-08-04 19:14:54,418 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens: [Kind: mapreduce.job, Service: job_1722798834650_0001, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@1fa1cab1)] 2024-08-04 19:14:54,585 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now. 2024-08-04 19:14:55,264 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_0/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_1/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001,/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_3/usercache/root/appcache/application_1722798834650_0001 2024-08-04 19:14:56,558 INFO [main] org.apache.hadoop.mapred.YarnChild: /************************************************************ [system properties] os.name: Linux os.version: 6.1.85+ java.home: /usr/lib/jvm/java-11-openjdk-amd64 java.runtime.version: 11.0.24+8-post-Ubuntu-1ubuntu322.04 java.vendor: Ubuntu java.version: 11.0.24 java.vm.name: OpenJDK 64-Bit Server VM java.class.path: 
/content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000007:/content/hadoop-3.4.0/etc/hadoop:/content/hadoop-3.4.0/share/hadoop/common/hadoop-kms-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/hadoop-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsp-api-2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/bcprov-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/
share/hadoop/common/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-util-9.4.53.v20
231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jul-to-slf4j-1.7.36.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/common/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-client-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-nfs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.100.Final.ja
r:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-guava-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-beanutils-1.9.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsch-0.1.55.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-io-2.14.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-all-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-cli-1.5.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/woodstox-core-5.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-servlet-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/token-provider-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-crypto-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-logging-1.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-buffer-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-auth-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-xdr-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/re2j-1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-ajax-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-text-1.10.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/HikariCP-4.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/metrics-core-3.2.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/avro-1.9.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/reload4j-1.2.22.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-configuration2-2.8.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-math3-3.6.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-core-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-codec-1.15.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-xml-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-lang3-3.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-client-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jettison-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-config-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-server-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-redis-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-asn1-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-identity-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/n
etty-resolver-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-udt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/gson-2.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-framework-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-databind-2.12.7.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpcore-4.4.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-socks-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-annotations-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jackson-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-webapp-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-admin-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/curator-recipes-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-server-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerby-pkix-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-xml-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-io-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-http-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jline-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-client-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-simplekdc-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-core-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-util-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-http2-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jakarta.activation-api-1.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/zookeeper-jute-3.8.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jetty-security-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-net-3.9.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-dns-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/commons-compress-1.24.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/stax2-api-4.2.1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/l
ib/jersey-core-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jersey-json-1.20.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/audience-annotations-0.12.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/dnsjava-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/nimbus-jose-jwt-9.31.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-util-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-handler-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/kerb-common-2.0.3.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/hadoop-shaded-protobuf_3_21-1.2.0.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/httpclient-4.5.13.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.100.Final-linux-x86_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/snappy-java-1.1.10.4.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-common-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.100.Final-osx-aarch_64.jar:/content/hadoop-3.4.0/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.100.Final.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-mawo-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-router-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-client-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-registry-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-globalpolicygenerator-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-services-api-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/hadoop-yarn-server-tests-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jakarta.xml.bind-api-2.3.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-plus-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/objenesis-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/l
ib/javax.inject-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jna-5.2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-client-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcpkix-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/snakeyaml-2.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-servlet-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/codemodel-2.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-tree-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-api-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-client-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/bcutil-jdk15on-1.70.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/fst-2.50.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-client-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax.websocket-api-1.0.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/guice-4.2.3.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jsonschema2pojo-core-1.0.2.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-common-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-jndi-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/asm-commons-9.6.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/javax-websocket-server-impl-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-guice-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jersey-client-1.19.4.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-servlet-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jetty-annotations-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/jackson-jaxrs-base-2.12.7.jar:/content/hadoop-3.4.0/share/hadoop/yarn/lib/websocket-server-9.4.53.v20231009.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/mockito-core-2.28.2.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.4.0-tests.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.4.0.jar:/content/hadoop-3.4.0/share/hadoop/mapreduce/lib/*:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:/content/tar
get/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000007/job.jar
java.io.tmpdir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000007/tmp
user.dir: /content/target/test/data/MiniHadoopClusterManager/yarn-857754/MiniHadoopClusterManager-localDir-nm-0_2/usercache/root/appcache/application_1722798834650_0001/container_1722798834650_0001_01_000007
user.name: root
************************************************************/
2024-08-04 19:14:56,559 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2024-08-04 19:14:57,910 INFO [main] org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory
2024-08-04 19:14:57,914 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 2
2024-08-04 19:14:57,915 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2024-08-04 19:14:57,986 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2024-08-04 19:14:58,019 INFO [main] org.apache.hadoop.mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@889d9e8
2024-08-04 19:14:58,022 WARN [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system already initialized!
2024-08-04 19:14:58,788 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2024-08-04 19:14:59,138 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1722798834650_0001_r_000000_0 is done. And is in the process of committing
2024-08-04 19:14:59,160 INFO [main] org.apache.hadoop.mapred.Task: Task attempt_1722798834650_0001_r_000000_0 is allowed to commit now
2024-08-04 19:14:59,189 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_1722798834650_0001_r_000000_0' to hdfs://localhost:8902/user/root/QuasiMonteCarlo_1722798838891_1650860168/out
2024-08-04 19:14:59,217 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1722798834650_0001_r_000000_0' done.
2024-08-04 19:14:59,241 INFO [main] org.apache.hadoop.mapred.Task: Final Counters for attempt_1722798834650_0001_r_000000_0: Counters: 35
	File System Counters
		FILE: Number of bytes read=116
		FILE: Number of bytes written=309493
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=0
		HDFS: Number of bytes written=215
		HDFS: Number of read operations=5
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=3
		HDFS: Number of bytes read erasure-coded=0
	Map-Reduce Framework
		Combine input records=0
		Combine output records=0
		Reduce input groups=2
		Reduce shuffle bytes=140
		Reduce input records=10
		Reduce output records=0
		Spilled Records=10
		Shuffled Maps =5
		Failed Shuffles=0
		Merged Map outputs=5
		GC time elapsed (ms)=92
		CPU time spent (ms)=1050
		Physical memory (bytes) snapshot=217210880
		Virtual memory (bytes) snapshot=2730729472
		Total committed heap usage (bytes)=216006656
		Peak Reduce Physical memory (bytes)=217210880
		Peak Reduce Virtual memory (bytes)=2730729472
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Output Format Counters
		Bytes Written=97
2024-08-04 19:14:59,243 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping ReduceTask metrics system...
2024-08-04 19:14:59,243 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system stopped.
2024-08-04 19:14:59,243 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system shutdown complete.

End of LogType:syslog
***********************************************************************

Container: container_1722798834650_0001_01_000007 on localhost_34535
LogAggregationType: AGGREGATED
====================================================================
LogType:syslog.shuffle
LogLastModifiedTime:Sun Aug 04 19:15:07 +0000 2024
LogLength:4984
LogContents:
2024-08-04 19:14:58,045 INFO [main] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: MergerManager: memoryLimit=601882624, maxSingleShuffleLimit=150470656, mergeThreshold=397242560, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2024-08-04 19:14:58,048 INFO [EventFetcher for fetching Map Completion Events] org.apache.hadoop.mapreduce.task.reduce.EventFetcher: attempt_1722798834650_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2024-08-04 19:14:58,069 INFO [EventFetcher for fetching Map Completion Events] org.apache.hadoop.mapreduce.task.reduce.EventFetcher: attempt_1722798834650_0001_r_000000_0: Got 5 new map-outputs
2024-08-04 19:14:58,348 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#1 about to shuffle output of map attempt_1722798834650_0001_m_000001_0 decomp: 24 len: 28 to MEMORY
2024-08-04 19:14:58,359 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_1722798834650_0001_m_000001_0
2024-08-04 19:14:58,362 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->24
2024-08-04 19:14:58,370 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#1 about to shuffle output of map attempt_1722798834650_0001_m_000000_0 decomp: 24 len: 28 to MEMORY
2024-08-04 19:14:58,370 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_1722798834650_0001_m_000000_0
2024-08-04 19:14:58,372 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 2, commitMemory -> 24, usedMemory ->48
2024-08-04 19:14:58,374 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#1 about to shuffle output of map attempt_1722798834650_0001_m_000003_0 decomp: 24 len: 28 to MEMORY
2024-08-04 19:14:58,375 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_1722798834650_0001_m_000003_0
2024-08-04 19:14:58,375 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 3, commitMemory -> 48, usedMemory ->72
2024-08-04 19:14:58,378 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#1 about to shuffle output of map attempt_1722798834650_0001_m_000002_0 decomp: 24 len: 28 to MEMORY
2024-08-04 19:14:58,378 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_1722798834650_0001_m_000002_0
2024-08-04 19:14:58,378 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 4, commitMemory -> 72, usedMemory ->96
2024-08-04 19:14:58,379 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#1 about to shuffle output of map attempt_1722798834650_0001_m_000004_0 decomp: 24 len: 28 to MEMORY
2024-08-04 19:14:58,379 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput: Read 24 bytes from map-output for attempt_1722798834650_0001_m_000004_0
2024-08-04 19:14:58,379 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 24, inMemoryMapOutputs.size() -> 5, commitMemory -> 96, usedMemory ->120
2024-08-04 19:14:58,381 INFO [EventFetcher for fetching Map Completion Events] org.apache.hadoop.mapreduce.task.reduce.EventFetcher: EventFetcher is interrupted.. Returning
2024-08-04 19:14:58,381 INFO [fetcher#1] org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl: localhost:43847 freed by fetcher#1 in 312ms
2024-08-04 19:14:58,394 INFO [main] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: finalMerge called with 5 in-memory map-outputs and 0 on-disk map-outputs
2024-08-04 19:14:58,423 INFO [main] org.apache.hadoop.mapred.Merger: Merging 5 sorted segments
2024-08-04 19:14:58,423 INFO [main] org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 5 segments left of total size: 105 bytes
2024-08-04 19:14:58,440 INFO [main] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merged 5 segments, 120 bytes to disk to satisfy reduce memory limit
2024-08-04 19:14:58,447 INFO [main] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merging 1 files, 116 bytes from disk
2024-08-04 19:14:58,448 INFO [main] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
2024-08-04 19:14:58,448 INFO [main] org.apache.hadoop.mapred.Merger: Merging 1 sorted segments
2024-08-04 19:14:58,486 INFO [main] org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 109 bytes

End of LogType:syslog.shuffle
*******************************************************************************
2024-08-04 19:15:45,913 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at localhost/127.0.0.1:8903
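The "Connecting to ResourceManager at localhost/127.0.0.1:8903" line above is the kind of message a YARN client prints whenever it contacts the ResourceManager, for example when retrieving aggregated container logs like the ones shown earlier. As a minimal sketch (not one of the original commands, and assuming the Hadoop distribution downloaded earlier lives in /content/hadoop-3.4.0 with JAVA_HOME and the MiniCluster configuration already set up in the environment by the previous cells), this is how those aggregated logs could be pulled from Python with the standard `yarn logs` CLI. The application ID is the one visible in this run's output; it changes on every run.
import os
import subprocess

# Assumed install location from the download step; adjust if your path differs.
HADOOP_HOME = "/content/hadoop-3.4.0"

# Application ID taken from the job output above; it will differ on every run.
APP_ID = "application_1722798834650_0001"

# `yarn logs -applicationId <appId>` dumps the aggregated logs of all containers
# that belonged to the application. The environment (JAVA_HOME, HADOOP_CONF_DIR)
# is inherited from the notebook process, so it must already point at the MiniCluster.
result = subprocess.run(
    [os.path.join(HADOOP_HOME, "bin", "yarn"), "logs", "-applicationId", APP_ID],
    capture_output=True,
    text=True,
)

# Print only the beginning; the full per-container dump is long, as seen above.
print(result.stdout[:2000])
On a default log4j setup the INFO client messages (such as the ResourceManager connection line) typically go to stderr, while the container log contents themselves are written to stdout, which is why the sketch prints `result.stdout`.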