{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Distributed Data Processing using Apache Spark and SageMaker Processing with Magic" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\u001B[0;31mDocstring:\u001B[0m\n", "::\n", "\n", " %pyspark [--base_job_name BASE_JOB_NAME] [--submit_app SUBMIT_APP]\n", " [--framework_version FRAMEWORK_VERSION]\n", " [--instance_type INSTANCE_TYPE]\n", " [--instance_count INSTANCE_COUNT]\n", " [--max_runtime_in_seconds MAX_RUNTIME_IN_SECONDS]\n", " [--submit_py_files [SUBMIT_PY_FILES [SUBMIT_PY_FILES ...]]]\n", " [--submit_jars [SUBMIT_JARS [SUBMIT_JARS ...]]]\n", " [--submit_files [SUBMIT_FILES [SUBMIT_FILES ...]]]\n", " [--arguments '--foo bar --baz 123']\n", " [--spark_event_logs_s3_uri SPARK_EVENT_LOGS_S3_URI]\n", " [--logs [LOGS]] [--name_contains NAME_CONTAINS]\n", " [--max_result MAX_RESULT]\n", " {submit,list,status,delete}\n", "\n", "Pyspark processor magic command\n", "\n", "positional arguments:\n", " {submit,list,status,delete}\n", "\n", "processor:\n", " --base_job_name BASE_JOB_NAME\n", " Prefix for processing name. If not specified, the\n", " processor generates a default job name, based on the\n", " training image name and current timestamp.\n", " --submit_app SUBMIT_APP\n", " Path (local or S3) to Python file to submit to Spark\n", " as the primary application\n", " --framework_version FRAMEWORK_VERSION\n", " The version of SageMaker PySpark.\n", " --instance_type INSTANCE_TYPE\n", " Type of EC2 instance to use for processing, for\n", " example, ‘ml.c4.xlarge’.\n", " --instance_count INSTANCE_COUNT\n", " Number of Amazon EC2 instances to use for processing.\n", " --max_runtime_in_seconds MAX_RUNTIME_IN_SECONDS\n", " Timeout in seconds. After this amount of time Amazon\n", " SageMaker terminates the job regardless of its current\n", " status.\n", " --submit_py_files <[SUBMIT_PY_FILES [SUBMIT_PY_FILES ...]]>\n", " You can specify any python dependencies or files that\n", " your script depends on\n", " --submit_jars <[SUBMIT_JARS [SUBMIT_JARS ...]]>\n", " You can specify any jar dependencies or files that\n", " your script depends on\n", " --submit_files <[SUBMIT_FILES [SUBMIT_FILES ...]]>\n", " List of .zip, .egg, or .py files to place on the\n", " PYTHONPATH for Python apps.\n", " --arguments <'--foo bar --baz 123'>\n", " A list of string arguments to be passed to a\n", " processing job\n", " --spark_event_logs_s3_uri SPARK_EVENT_LOGS_S3_URI\n", " S3 path where spark application events will be\n", " published to.\n", " --logs <[LOGS]> Whether to show the logs produced by the job.\n", "\n", "list:\n", " --name_contains NAME_CONTAINS\n", " --max_result MAX_RESULT\n", "\u001B[0;31mFile:\u001B[0m /opt/conda/lib/python3.8/site-packages/sage_maker_kernel/kernelmagics.py\n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "%pyspark?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setup S3 bucket locations\n", "\n", "First, setup some locations in the default SageMaker bucket to store the raw input datasets and the Spark job output." 
] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Couldn't call 'get_role' to get Role ARN from role name workshop-sagemaker to get Role path.\n" ] } ], "source": [ "import logging\n", "import sagemaker\n", "from time import gmtime, strftime\n", "\n", "sagemaker_logger = logging.getLogger(\"sagemaker\")\n", "sagemaker_logger.setLevel(logging.INFO)\n", "sagemaker_logger.addHandler(logging.StreamHandler())\n", "\n", "sagemaker_session = sagemaker.Session()\n", "bucket = sagemaker_session.default_bucket()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, you'll download the example dataset from a SageMaker staging bucket." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--2020-12-17 16:38:38-- https://s3-us-west-2.amazonaws.com/sparkml-mleap/data/abalone/abalone.csv\n", "Resolving s3-us-west-2.amazonaws.com (s3-us-west-2.amazonaws.com)... 52.218.252.144\n", "Connecting to s3-us-west-2.amazonaws.com (s3-us-west-2.amazonaws.com)|52.218.252.144|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 191873 (187K) [binary/octet-stream]\n", "Saving to: ‘abalone.csv’\n", "\n", "abalone.csv 100%[===================>] 187.38K 506KB/s in 0.4s \n", "\n", "2020-12-17 16:38:39 (506 KB/s) - ‘abalone.csv’ saved [191873/191873]\n", "\n" ] } ], "source": [ "# Fetch the dataset from the SageMaker bucket\n", "!wget https://s3-us-west-2.amazonaws.com/sparkml-mleap/data/abalone/abalone.csv -O abalone.csv" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sagemaker-eu-west-1-245582572290 sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone\n" ] } ], "source": [ "# Upload the raw input dataset to a unique S3 location\n", "timestamp_prefix = strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n", "prefix = \"sagemaker/spark-preprocess-demo/{}\".format(timestamp_prefix)\n", "input_prefix_abalone = \"{}/input/raw/abalone\".format(prefix)\n", "input_preprocessed_prefix_abalone = \"{}/input/preprocessed/abalone\".format(prefix)\n", "sagemaker_session.upload_data(path='abalone.csv', bucket=bucket, key_prefix=input_prefix_abalone)\n", "print(bucket, input_prefix_abalone)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Write the PySpark script\n", "\n", "The source for a preprocessing script is in the cell below. The cell uses the `%%pyspark submit` directive to submit python application from cell to PySparkProcessor. This script does some basic feature engineering on a raw input dataset. In this example, the dataset is the [Abalone Data Set](https://archive.ics.uci.edu/ml/datasets/abalone) and the code below performs string indexing, one hot encoding, vector assembly, and combines them into a pipeline to perform these transformations in order. The script then does an 80-20 split to produce training and validation datasets as output." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Couldn't call 'get_role' to get Role ARN from role name workshop-sagemaker to get Role path.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "submit:\n", " {\n", " \"arguments\": [\n", " \"--s3_input_bucket\",\n", " \"sagemaker-eu-west-1-245582572290\",\n", " \"--s3_input_key_prefix\",\n", " \"sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone\",\n", " \"--s3_output_bucket\",\n", " \"sagemaker-eu-west-1-245582572290\",\n", " \"--s3_output_key_prefix\",\n", " \"sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/preprocessed/abalone\"\n", " ],\n", " \"base_job_name\": \"sm-spark\",\n", " \"framework_version\": \"2.4\",\n", " \"instance_count\": 1,\n", " \"instance_type\": \"ml.c4.xlarge\",\n", " \"logs\": true,\n", " \"max_result\": 10,\n", " \"max_runtime_in_seconds\": 1200,\n", " \"name_contains\": \"spark\",\n", " \"role\": \"arn:aws:iam::245582572290:role/workshop-sagemaker\",\n", " \"submit_app\": \"/tmp/tmp-02acb435-c907-4d1c-9e8f-bb0bd5bc5582.py\",\n", " \"wait\": true\n", "}\n", "\n", "Job Name: sm-spark-2020-12-18-19-04-49-672\n", "Inputs: [{'InputName': 'code', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-eu-west-1-245582572290/sm-spark-2020-12-18-19-04-49-672/input/code/tmp-02acb435-c907-4d1c-9e8f-bb0bd5bc5582.py', 'LocalPath': '/opt/ml/processing/input/code', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}]\n", "Outputs: []\n", ".............................\u001B[34m/usr/local/lib/python3.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.2) or chardet (3.0.4) doesn't match a supported version!\n", " RequestsDependencyWarning)\u001B[0m\n", "\u001B[34m12-18 19:09 smspark.cli INFO Parsing arguments. 
argv: ['/usr/local/bin/smspark-submit', '/opt/ml/processing/input/code/tmp-02acb435-c907-4d1c-9e8f-bb0bd5bc5582.py', '--s3_input_bucket', 'sagemaker-eu-west-1-245582572290', '--s3_input_key_prefix', 'sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone', '--s3_output_bucket', 'sagemaker-eu-west-1-245582572290', '--s3_output_key_prefix', 'sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/preprocessed/abalone']\u001B[0m\n", "\u001B[34m12-18 19:09 smspark.cli INFO Raw spark options before processing: {'class_': None, 'jars': None, 'py_files': None, 'files': None, 'verbose': False}\u001B[0m\n", "\u001B[34m12-18 19:09 smspark.cli INFO App and app arguments: ['/opt/ml/processing/input/code/tmp-02acb435-c907-4d1c-9e8f-bb0bd5bc5582.py', '--s3_input_bucket', 'sagemaker-eu-west-1-245582572290', '--s3_input_key_prefix', 'sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone', '--s3_output_bucket', 'sagemaker-eu-west-1-245582572290', '--s3_output_key_prefix', 'sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/preprocessed/abalone']\u001B[0m\n", "\u001B[34m12-18 19:09 smspark.cli INFO Rendered spark options: {'class_': None, 'jars': None, 'py_files': None, 'files': None, 'verbose': False}\u001B[0m\n", "\u001B[34m12-18 19:09 smspark.cli INFO Initializing processing job.\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO {'current_host': 'algo-1', 'hosts': ['algo-1']}\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO {'ProcessingJobArn': 'arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-18-19-04-49-672', 'ProcessingJobName': 'sm-spark-2020-12-18-19-04-49-672', 'AppSpecification': {'ImageUri': '571004829621.dkr.ecr.eu-west-1.amazonaws.com/sagemaker-spark-processing:2.4-cpu', 'ContainerEntrypoint': ['smspark-submit', '/opt/ml/processing/input/code/tmp-02acb435-c907-4d1c-9e8f-bb0bd5bc5582.py'], 'ContainerArguments': ['--s3_input_bucket', 'sagemaker-eu-west-1-245582572290', '--s3_input_key_prefix', 'sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone', '--s3_output_bucket', 'sagemaker-eu-west-1-245582572290', '--s3_output_key_prefix', 'sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/preprocessed/abalone']}, 'ProcessingInputs': [{'InputName': 'code', 'AppManaged': False, 'S3Input': {'LocalPath': '/opt/ml/processing/input/code', 'S3Uri': 's3://sagemaker-eu-west-1-245582572290/sm-spark-2020-12-18-19-04-49-672/input/code/tmp-02acb435-c907-4d1c-9e8f-bb0bd5bc5582.py', 'S3DataDistributionType': 'FullyReplicated', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3CompressionType': 'None', 'S3DownloadMode': 'StartOfJob'}, 'DatasetDefinition': None}], 'ProcessingOutputConfig': {'Outputs': [], 'KmsKeyId': None}, 'ProcessingResources': {'ClusterConfig': {'InstanceCount': 1, 'InstanceType': 'ml.c4.xlarge', 'VolumeSizeInGB': 30, 'VolumeKmsKeyId': None}}, 'RoleArn': 'arn:aws:iam::245582572290:role/workshop-sagemaker', 'StoppingCondition': {'MaxRuntimeInSeconds': 1200}}\u001B[0m\n", "\u001B[34m12-18 19:09 smspark.cli INFO running spark submit command: spark-submit --master yarn --deploy-mode client /opt/ml/processing/input/code/tmp-02acb435-c907-4d1c-9e8f-bb0bd5bc5582.py --s3_input_bucket sagemaker-eu-west-1-245582572290 --s3_input_key_prefix sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone --s3_output_bucket sagemaker-eu-west-1-245582572290 --s3_output_key_prefix sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/preprocessed/abalone\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit 
INFO waiting for hosts\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO starting status server\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO Status server listening on algo-1:5555\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO bootstrapping cluster\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO transitioning from status INITIALIZING to BOOTSTRAPPING\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO copying aws jars\u001B[0m\n", "\u001B[34mServing on http://algo-1:5555\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO Found hadoop jar hadoop-aws.jar\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO Copying optional jar jets3t-0.9.0.jar from /usr/lib/hadoop/lib to /usr/lib/spark/jars\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO copying cluster config\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO copying /opt/hadoop-config/hdfs-site.xml to /usr/lib/hadoop/etc/hadoop/hdfs-site.xml\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO copying /opt/hadoop-config/core-site.xml to /usr/lib/hadoop/etc/hadoop/core-site.xml\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO copying /opt/hadoop-config/yarn-site.xml to /usr/lib/hadoop/etc/hadoop/yarn-site.xml\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO copying /opt/hadoop-config/spark-defaults.conf to /usr/lib/spark/conf/spark-defaults.conf\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO copying /opt/hadoop-config/spark-env.sh to /usr/lib/spark/conf/spark-env.sh\u001B[0m\n", "\u001B[34m12-18 19:09 root INFO Detected instance type: c4.xlarge with total memory: 7680M and total cores: 4\u001B[0m\n", "\u001B[34m12-18 19:09 root INFO Writing default config to /usr/lib/hadoop/etc/hadoop/yarn-site.xml\u001B[0m\n", "\u001B[34m12-18 19:09 root INFO Configuration at /usr/lib/hadoop/etc/hadoop/yarn-site.xml is: \u001B[0m\n", "\u001B[34m\u001B[0m\n", "\u001B[34m\n", " \n", " \n", " yarn.resourcemanager.hostname\n", " 10.0.68.168\n", " The hostname of the RM.\n", " \n", " \n", " yarn.nodemanager.hostname\n", " algo-1\n", " The hostname of the NM.\n", " \n", " \n", " yarn.nodemanager.webapp.address\n", " algo-1:8042\n", " \n", " \n", " yarn.nodemanager.vmem-pmem-ratio\n", " 5\n", " Ratio between virtual memory to physical memory.\n", " \n", " \n", " yarn.resourcemanager.am.max-attempts\n", " 1\n", " The maximum number of application attempts.\n", " \n", " \n", " yarn.nodemanager.env-whitelist\n", " JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOME,AWS_CONTAINER_CREDENTIALS_RELATIVE_URI\n", " Environment variable whitelist\n", " \n", "\n", " \n", " \n", " yarn.scheduler.minimum-allocation-mb\n", " 1\n", " \n", " \n", " yarn.scheduler.maximum-allocation-mb\n", " 7449\n", " \n", " \n", " yarn.scheduler.minimum-allocation-vcores\n", " 1\n", " \n", " \n", " yarn.scheduler.maximum-allocation-vcores\n", " 4\n", " \n", " \n", " yarn.nodemanager.resource.memory-mb\n", " 7449\n", " \n", " \n", " yarn.nodemanager.resource.cpu-vcores\n", " 4\n", " \u001B[0m\n", "\u001B[34m\n", "\u001B[0m\n", "\u001B[34m12-18 19:09 root INFO Writing default config to /usr/lib/spark/conf/spark-defaults.conf\u001B[0m\n", "\u001B[34m12-18 19:09 root INFO Configuration at /usr/lib/spark/conf/spark-defaults.conf is: \u001B[0m\n", "\u001B[34mspark.driver.extraClassPath 
/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/goodies/lib/emr-spark-goodies.jar:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar\u001B[0m\n", "\u001B[34mspark.driver.extraLibraryPath /usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native\u001B[0m\n", "\u001B[34mspark.executor.extraClassPath /usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/goodies/lib/emr-spark-goodies.jar:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar\u001B[0m\n", "\u001B[34mspark.executor.extraLibraryPath /usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native\u001B[0m\n", "\u001B[34mspark.driver.host=10.0.68.168\u001B[0m\n", "\u001B[34mspark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2\u001B[0m\n", "\u001B[34mspark.driver.memory 2048m\u001B[0m\n", "\u001B[34mspark.driver.memoryOverhead 204m\u001B[0m\n", "\u001B[34mspark.driver.defaultJavaOptions -XX:OnOutOfMemoryError='kill -9 %p' -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled\u001B[0m\n", "\u001B[34mspark.executor.memory 4724m\u001B[0m\n", "\u001B[34mspark.executor.memoryOverhead 472m\u001B[0m\n", "\u001B[34mspark.executor.cores 4\u001B[0m\n", "\u001B[34mspark.executor.defaultJavaOptions -verbose:gc -XX:OnOutOfMemoryError='kill -9 %p' -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseParallelGC -XX:InitiatingHeapOccupancyPercent=70 -XX:ConcGCThreads=1 -XX:ParallelGCThreads=3 \u001B[0m\n", "\u001B[34mspark.executor.instances 1\u001B[0m\n", "\u001B[34mspark.default.parallelism 8\n", "\u001B[0m\n", "\u001B[34m12-18 19:09 root INFO Finished Yarn configuration files setup.\u001B[0m\n", "\u001B[34m12-18 19:09 root INFO No file at /opt/ml/processing/input/conf/configuration.json exists, skipping user configuration\u001B[0m\n", "\u001B[34m20/12/18 19:09:29 INFO namenode.NameNode: STARTUP_MSG: \u001B[0m\n", "\u001B[34m/************************************************************\u001B[0m\n", "\u001B[34mSTARTUP_MSG: Starting NameNode\u001B[0m\n", "\u001B[34mSTARTUP_MSG: host = algo-1/10.0.68.168\u001B[0m\n", "\u001B[34mSTARTUP_MSG: args = [-format, -force]\u001B[0m\n", "\u001B[34mSTARTUP_MSG: version = 2.10.0-amzn-0\u001B[0m\n", "\u001B[34mSTARTUP_MSG: classpath = 
/usr/lib/hadoop/etc/hadoop:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/httpcore-4.4.11.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop/lib/httpclient-4.5.9.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/jsch-0.1.54.jar:/usr/lib/hadoop/lib/spotbugs-annotations-3.1.9.jar:/usr/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/zookeeper-3.4.14.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/audience-annotations-0.5.0.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/commons-lang3-3.4.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/json-smart-1.3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.1.jar:/usr/lib/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop/lib/stax2-api-3.1.4.jar:/usr/lib/hadoop/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/commons-beanutils-1.9.4.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/snappy-java-1.1.7.3.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/gson-2.2.4.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/woodstox-core-5.0.3.jar:/usr/lib/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.25.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/avro-1.7.7.jar:/usr/lib/hadoop/lib/jetty-sslengine-6.1.26-emr.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/lib/hadoop/lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/junit-4.11.jar:/usr/lib/hadoop/lib/commons-compress-1.19.jar:/usr/lib/hadoop/.//hadoop-streaming-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-extras-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-applicationhistoryservice-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-archive-logs.jar:/usr/lib/hadoop/.//hadoop-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-rumen.jar:/usr/lib/hadoop/.//hadoop-yarn-server-resourcemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop/.//hadoop-yarn-registry.jar:/usr/lib/hadoop/.//hadoop-nfs.jar:/usr/lib/hadoop/.//hadoop-aliyun.jar:/usr/lib/hadoop/.//hadoop-ant-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-gridmix.jar:/usr/lib/hadoop/./
/hadoop-resourceestimator-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop/.//hadoop-aws.jar:/usr/lib/hadoop/.//hadoop-archives.jar:/usr/lib/hadoop/.//hadoop-azure-datalake.jar:/usr/lib/hadoop/.//hadoop-yarn-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-api.jar:/usr/lib/hadoop/.//hadoop-archives-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-distcp.jar:/usr/lib/hadoop/.//hadoop-azure-datalake-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-openstack-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-aws-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-azure.jar:/usr/lib/hadoop/.//hadoop-nfs-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop/.//hadoop-rumen-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-distcp-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-ant.jar:/usr/lib/hadoop/.//hadoop-sls.jar:/usr/lib/hadoop/.//hadoop-azure-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-registry-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-archive-logs-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-auth-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop/.//hadoop-extras.jar:/usr/lib/hadoop/.//hadoop-annotations-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//hadoop-datajoin.jar:/usr/lib/hadoop/.//hadoop-aliyun-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-openstack.jar:/usr/lib/hadoop/.//hadoop-common-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop/.//hadoop-resourceestimator.jar:/usr/lib/hadoop/.//hadoop-datajoin-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-web-proxy-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-gridmix-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-yarn-api-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-streaming.jar:/usr/lib/hadoop/.//hadoop-yarn-common.jar:/usr/lib/hadoop/.//hadoop-sls-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/jackson-core-2.6.7.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-1.4.01.jar:/usr/lib/hadoop-hdfs/lib/jackson-databind-2.6.7.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/xercesImpl-2.12.0.jar:/usr/lib/hadoop-hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop-hdfs/lib/jackson-annotations-2.6.7.jar:/usr/lib/hadoop-hdfs/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-hdfs/lib/okhttp-2.7.5.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-native-client-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-rbf
.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-client-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-rbf-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-native-client.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-native-client-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-client-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-client.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-rbf-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop-yarn/lib/httpcore-4.4.11.jar:/usr/lib/hadoop-yarn/lib/java-util-1.9.0.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/paranamer-2.3.jar:/usr/lib/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-yarn/lib/httpclient-4.5.9.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-yarn/lib/jsch-0.1.54.jar:/usr/lib/hadoop-yarn/lib/spotbugs-annotations-3.1.9.jar:/usr/lib/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.14.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/nimbus-jose-jwt-4.41.1.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/audience-annotations-0.5.0.jar:/usr/lib/hadoop-yarn/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/commons-lang3-3.4.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/HikariCP-java7-2.4.12.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/commons-net-3.1.jar:/usr/lib/hadoop-yarn/lib/json-smart-1.3.1.jar:/usr/lib/hadoop-yarn/lib/curator-client-2.7.1.jar:/usr/lib/hadoop-yarn/lib/metrics-core-3.0.1.jar:/usr/lib/hadoop-yarn/lib/htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop-yarn/lib/stax2-api-3.1.4.jar:/usr/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/commons-beanutils-1.9.4.jar:/usr/lib/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/lib/hadoop-yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/lib/hadoop-yarn/lib/json-io-2.5.1.jar:/usr/lib/hadoop-yarn/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop-yarn/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/snappy-java-1.1.7.3.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/us
r/lib/hadoop-yarn/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/gson-2.2.4.jar:/usr/lib/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/lib/hadoop-yarn/lib/jets3t-0.9.0.jar:/usr/lib/hadoop-yarn/lib/woodstox-core-5.0.3.jar:/usr/lib/hadoop-yarn/lib/jcip-annotations-1.0-1.jar:/usr/lib/hadoop-yarn/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/avro-1.7.7.jar:/usr/lib/hadoop-yarn/lib/fst-2.50.jar:/usr/lib/hadoop-yarn/lib/jetty-sslengine-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/ehcache-3.3.1.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.19.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-router-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-router.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.1.7.3.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/li
b/hadoop-mapreduce/lib/avro-1.7.7.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.19.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//httpcore-4.4.11.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-applicationhistoryservice-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archive-logs.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-2.6.7.jar:/usr/lib/hadoop-mapreduce/.//java-util-1.9.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/lib/hadoop-mapreduce/.//jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-resourcemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.5.9.jar:/usr/lib/hadoop-mapreduce/.//azure-keyvault-core-0.8.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-registry.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//leveldbjni-all-1.8.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.54.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-core-3.4.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aliyun.jar:/usr/lib/hadoop-mapreduce/.//spotbugs-annotations-3.1.9.jar:/usr/lib/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-sts-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2\u001B[0m\n", 
"\u001B[34m.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-resourceestimator-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-ecs-4.2.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-mapreduce/.//azure-storage-5.4.0.jar:/usr/lib/hadoop-mapreduce/.//zookeeper-3.4.14.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aws.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives.jar:/usr/lib/hadoop-mapreduce/.//nimbus-jose-jwt-4.41.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-datalake.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-api.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/.//jackson-databind-2.6.7.jar:/usr/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//audience-annotations-0.5.0.jar:/usr/lib/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/.//jersey-client-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-datalake-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-lang3-3.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-openstack-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//HikariCP-java7-2.4.12.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aws-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure.jar:/usr/lib/hadoop-mapreduce/.//geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/lib/hadoop-mapreduce/.//aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//json-smart-1.3.1.jar:/usr/lib/hadoop-mapreduce/.//aws-java-sdk-bundle-1.11.852.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-mapreduce/.//aliyun-sdk-oss-3.4.1.jar:/usr/lib/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/lib/hadoop-mapreduce/.//htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop-mapreduce/.//stax2-api-3.1.4.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.2.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-1.9.4.jar:/usr/lib/hadoop-mapreduce/.//azure-data-lake-store-sdk-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//mssql-jdbc-6.2.1.jre7.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jackson-annotations-2.6.7.jar:/usr/lib/hadoop-mapreduce/.//json-io-2.5.1.jar:/usr/lib/hadoop-mapreduce/.//netty-3.10.6.Final.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/
lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/.//jdom-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-registry-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archive-logs-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.1.7.3.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//ojalgo-43.0.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//woodstox-core-5.0.3.jar:/usr/lib/hadoop-mapreduce/.//jcip-annotations-1.0-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras.jar:/usr/lib/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-ram-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aliyun-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//avro-1.7.7.jar:/usr/lib/hadoop-mapreduce/.//fst-2.50.jar:/usr/lib/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/.//jetty-sslengine-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/.//javax.inject-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-resourceestimator.jar:/usr/lib/hadoop-mapreduce/.//ehcache-3.3.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-web-proxy-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/.//asm-3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//guice-3.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-api-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.19.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-common.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.10.0-amzn-0.jar\u001B[0m\n", "\u001B[34mSTARTUP_MSG: build = git@aws157git.com:/pkg/Aws157BigTop -r 
d1e860a34cc1aea3d600c57c5c0270ea41579e8c; compiled by 'ec2-user' on 2020-09-19T02:05Z\u001B[0m\n", "\u001B[34mSTARTUP_MSG: java = 1.8.0_265\u001B[0m\n", "\u001B[34m************************************************************/\u001B[0m\n", "\u001B[34m20/12/18 19:09:29 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]\u001B[0m\n", "\u001B[34m20/12/18 19:09:29 INFO namenode.NameNode: createNameNode [-format, -force]\u001B[0m\n", "\u001B[34mFormatting using clusterid: CID-e5ed05aa-d91e-4ee1-bc45-a4102c300e93\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSEditLog: Edit logging is async:true\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSNamesystem: KeyProvider: null\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSNamesystem: fsLock is fair: true\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSNamesystem: supergroup = supergroup\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSNamesystem: isPermissionEnabled = true\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSNamesystem: HA Enabled: false\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManager: The block deletion will start around 2020 Dec 18 19:09:30\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: Computing capacity for map BlocksMap\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: VM type = 64-bit\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: capacity = 2^21 = 2097152 entries\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManager: defaultReplication = 3\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManager: maxReplication = 512\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManager: minReplication = 1\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManager: maxReplicationStreams = 
2\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManager: encryptDataTransfer = false\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSNamesystem: Append Enabled: true\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: Computing capacity for map INodeMap\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: VM type = 64-bit\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: capacity = 2^20 = 1048576 entries\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSDirectory: ACLs enabled? false\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSDirectory: XAttrs enabled? true\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.NameNode: Caching file names occurring more than 10 times\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: Computing capacity for map cachedBlocks\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: VM type = 64-bit\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: capacity = 2^18 = 262144 entries\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSNamesystem: Retry cache on namenode is enabled\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: Computing capacity for map NameNodeRetryCache\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: VM type = 64-bit\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO util.GSet: capacity = 2^15 = 32768 entries\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1734687384-10.0.68.168-1608318570285\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO common.Storage: Storage directory /opt/amazon/hadoop/hdfs/namenode has been successfully formatted.\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/amazon/hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 using no compression\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSImageFormatProtobuf: Image file /opt/amazon/hadoop/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds .\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.FSImage: FSImageSaver clean 
checkpoint: txid = 0 when meet shutdown.\u001B[0m\n", "\u001B[34m20/12/18 19:09:30 INFO namenode.NameNode: SHUTDOWN_MSG: \u001B[0m\n", "\u001B[34m/************************************************************\u001B[0m\n", "\u001B[34mSHUTDOWN_MSG: Shutting down NameNode at algo-1/10.0.68.168\u001B[0m\n", "\u001B[34m************************************************************/\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO waiting for cluster to be up\u001B[0m\n", "\u001B[34m20/12/18 19:09:31 INFO datanode.DataNode: STARTUP_MSG: \u001B[0m\n", "\u001B[34m/************************************************************\u001B[0m\n", "\u001B[34mSTARTUP_MSG: Starting DataNode\u001B[0m\n", "\u001B[34mSTARTUP_MSG: host = algo-1/10.0.68.168\u001B[0m\n", "\u001B[34mSTARTUP_MSG: args = []\u001B[0m\n", "\u001B[34mSTARTUP_MSG: version = 2.10.0-amzn-0\u001B[0m\n", "\u001B[34mSTARTUP_MSG: classpath = /usr/lib/hadoop/etc/hadoop:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/httpcore-4.4.11.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop/lib/httpclient-4.5.9.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/jsch-0.1.54.jar:/usr/lib/hadoop/lib/spotbugs-annotations-3.1.9.jar:/usr/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/zookeeper-3.4.14.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/audience-annotations-0.5.0.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/commons-lang3-3.4.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/json-smart-1.3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.1.jar:/usr/lib/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop/lib/stax2-api-3.1.4.jar:/usr/lib/hadoop/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/commons-beanutils-1.9.4.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/snappy-java-1.1.7.3.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/gson-2.2.4.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/woodstox-core-5.0.3.jar:/usr/lib/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.25.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/avro-1.7.7.jar:/usr/lib/hadoop/lib/jetty-sslengine-6.1.26-em
r.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/lib/hadoop/lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/junit-4.11.jar:/usr/lib/hadoop/lib/commons-compress-1.19.jar:/usr/lib/hadoop/.//hadoop-streaming-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-extras-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-applicationhistoryservice-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-archive-logs.jar:/usr/lib/hadoop/.//hadoop-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-rumen.jar:/usr/lib/hadoop/.//hadoop-yarn-server-resourcemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop/.//hadoop-yarn-registry.jar:/usr/lib/hadoop/.//hadoop-nfs.jar:/usr/lib/hadoop/.//hadoop-aliyun.jar:/usr/lib/hadoop/.//hadoop-ant-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-gridmix.jar:/usr/lib/hadoop/.//hadoop-resourceestimator-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop/.//hadoop-aws.jar:/usr/lib/hadoop/.//hadoop-archives.jar:/usr/lib/hadoop/.//hadoop-azure-datalake.jar:/usr/lib/hadoop/.//hadoop-yarn-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-api.jar:/usr/lib/hadoop/.//hadoop-archives-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-distcp.jar:/usr/lib/hadoop/.//hadoop-azure-datalake-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-openstack-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-aws-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-azure.jar:/usr/lib/hadoop/.//hadoop-nfs-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop/.//hadoop-rumen-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-distcp-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-ant.jar:/usr/lib/hadoop/.//hadoop-sls.jar:/usr/lib/hadoop/.//hadoop-azure-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-registry-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-archive-logs-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-auth-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop/.//hadoop-extras.jar:/usr/lib/hadoop/.//hadoop-annotations-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//hadoop-datajoin.jar:/usr/lib/hadoop/.//hadoop-aliyun-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-openstack.jar:/usr/lib/hadoop/.//hadoop-common-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop/.//hadoop-resourceestimator.jar:/usr/lib/hadoop/.//hadoop-datajoin-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-web-proxy-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-gridmix-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-yarn-api-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-streaming.jar:/usr/lib/hadoop/.//hadoop-yarn-common.jar:/usr/lib/hadoop/.//hadoop-sls-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/jackson-core-2.6.7.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-1.4.01.jar:/usr/lib/hadoop-hdfs/lib/jackson-databind-2.6.7.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop
-hdfs/lib/netty-all-4.0.23.Final.jar:...\u001B[0m\n", "\u001B[34mSTARTUP_MSG: build = git@aws157git.com:/pkg/Aws157BigTop -r 
d1e860a34cc1aea3d600c57c5c0270ea41579e8c; compiled by 'ec2-user' on 2020-09-19T02:05Z\u001B[0m\n", "\u001B[34mSTARTUP_MSG: java = 1.8.0_265\u001B[0m\n", "\u001B[34m************************************************************/\u001B[0m\n", "\u001B[34m20/12/18 19:09:31 INFO datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]\u001B[0m\n", "\u001B[34m20/12/18 19:09:31 INFO resourcemanager.ResourceManager: STARTUP_MSG: \u001B[0m\n", "\u001B[34m/************************************************************\u001B[0m\n", "\u001B[34mSTARTUP_MSG: Starting ResourceManager\u001B[0m\n", "\u001B[34mSTARTUP_MSG: host = algo-1/10.0.68.168\u001B[0m\n", "\u001B[34mSTARTUP_MSG: args = []\u001B[0m\n", "\u001B[34mSTARTUP_MSG: version = 2.10.0-amzn-0\u001B[0m\n", "\u001B[34mSTARTUP_MSG: classpath = /usr/lib/hadoop/etc/hadoop:...\u001B[0m\n", "\u001B[34mSTARTUP_MSG: build = git@aws157git.com:/pkg/Aws157BigTop -r d1e860a34cc1aea3d600c57c5c0270ea41579e8c; compiled by 'ec2-user' on 2020-09-19T02:05Z\u001B[0m\n", "\u001B[34mSTARTUP_MSG: java = 1.8.0_265\u001B[0m\n", 
"\u001B[34m************************************************************/\u001B[0m\n", "\u001B[34m20/12/18 19:09:31 INFO resourcemanager.ResourceManager: registered UNIX signal handlers for [TERM, HUP, INT]\u001B[0m\n", "\u001B[34m20/12/18 19:09:31 INFO nodemanager.NodeManager: STARTUP_MSG: \u001B[0m\n", "\u001B[34m/************************************************************\u001B[0m\n", "\u001B[34mSTARTUP_MSG: Starting NodeManager\u001B[0m\n", "\u001B[34mSTARTUP_MSG: host = algo-1/10.0.68.168\u001B[0m\n", "\u001B[34mSTARTUP_MSG: args = []\u001B[0m\n", "\u001B[34mSTARTUP_MSG: version = 2.10.0-amzn-0\u001B[0m\n", "\u001B[34mSTARTUP_MSG: classpath = /usr/lib/hadoop/etc/hadoop:/usr/lib/hadoop/etc/hadoop:/usr/lib/hadoop/etc/hadoop:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/httpcore-4.4.11.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop/lib/httpclient-4.5.9.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/jsch-0.1.54.jar:/usr/lib/hadoop/lib/spotbugs-annotations-3.1.9.jar:/usr/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/zookeeper-3.4.14.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/audience-annotations-0.5.0.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/commons-lang3-3.4.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/json-smart-1.3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.1.jar:/usr/lib/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop/lib/stax2-api-3.1.4.jar:/usr/lib/hadoop/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/commons-beanutils-1.9.4.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/snappy-java-1.1.7.3.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/gson-2.2.4.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/woodstox-core-5.0.3.jar:/usr/lib/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.25.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/avro-1.7.7.jar:/usr/lib/hadoop/lib/jetty-sslengine-6.1.26-emr.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/lib/hadoop/lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/junit-4.11.jar:/u
sr/lib/hadoop/lib/commons-compress-1.19.jar:/usr/lib/hadoop/.//hadoop-streaming-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-extras-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-applicationhistoryservice-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-archive-logs.jar:/usr/lib/hadoop/.//hadoop-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-rumen.jar:/usr/lib/hadoop/.//hadoop-yarn-server-resourcemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop/.//hadoop-yarn-registry.jar:/usr/lib/hadoop/.//hadoop-nfs.jar:/usr/lib/hadoop/.//hadoop-aliyun.jar:/usr/lib/hadoop/.//hadoop-ant-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-gridmix.jar:/usr/lib/hadoop/.//hadoop-resourceestimator-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop/.//hadoop-aws.jar:/usr/lib/hadoop/.//hadoop-archives.jar:/usr/lib/hadoop/.//hadoop-azure-datalake.jar:/usr/lib/hadoop/.//hadoop-yarn-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-api.jar:/usr/lib/hadoop/.//hadoop-archives-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-distcp.jar:/usr/lib/hadoop/.//hadoop-azure-datalake-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-openstack-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-aws-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-azure.jar:/usr/lib/hadoop/.//hadoop-nfs-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop/.//hadoop-rumen-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-distcp-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-ant.jar:/usr/lib/hadoop/.//hadoop-sls.jar:/usr/lib/hadoop/.//hadoop-azure-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-registry-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-archive-logs-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-auth-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop/.//hadoop-extras.jar:/usr/lib/hadoop/.//hadoop-annotations-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//hadoop-datajoin.jar:/usr/lib/hadoop/.//hadoop-aliyun-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-openstack.jar:/usr/lib/hadoop/.//hadoop-common-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop/.//hadoop-resourceestimator.jar:/usr/lib/hadoop/.//hadoop-datajoin-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-web-proxy-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-gridmix-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-yarn-api-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-streaming.jar:/usr/lib/hadoop/.//hadoop-yarn-common.jar:/usr/lib/hadoop/.//hadoop-sls-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/jackson-core-2.6.7.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-1.4.01.jar:/usr/lib/hadoop-hdfs/lib/jackson-databind-2.6.7.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/xercesImpl-2.12.0.jar:/usr/lib/hadoop-hdfs/lib/htrace-core4-4.1.0-incubating.jar:/us
r/lib/hadoop-hdfs/lib/jackson-annotations-2.6.7.jar:/usr/lib/hadoop-hdfs/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-hdfs/lib/okhttp-2.7.5.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-native-client-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-rbf.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-client-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-rbf-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-native-client.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-native-client-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-client-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-client.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-rbf-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop-yarn/lib/httpcore-4.4.11.jar:/usr/lib/hadoop-yarn/lib/java-util-1.9.0.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/paranamer-2.3.jar:/usr/lib/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-yarn/lib/httpclient-4.5.9.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-yarn/lib/jsch-0.1.54.jar:/usr/lib/hadoop-yarn/lib/spotbugs-annotations-3.1.9.jar:/usr/lib/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.14.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/nimbus-jose-jwt-4.41.1.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/audience-annotations-0.5.0.jar:/usr/lib/hadoop-yarn/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/commons-lang3-3.4.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/HikariCP-java7-2.4.12.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/commons-net-3.1.jar:/usr/lib/hadoop-yarn/lib/json-smart-1.3.1.jar:/usr/lib/hadoop-yarn/lib/curator-client-2.7.1.jar:/usr/lib/hadoop-yarn/lib/metrics-core-3.0.1.jar:/usr/lib/hadoop-yarn/lib/htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop-yarn/lib/stax2-api-3.1.4.jar:/usr/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/commons-beanutils-1.9.4.jar:/usr/lib/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/lib/hadoop-yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/lib/hadoop-yarn/lib/json-io-2.5.1.jar:/usr/lib/hadoop-y
arn/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop-yarn/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/snappy-java-1.1.7.3.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/gson-2.2.4.jar:/usr/lib/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/lib/hadoop-yarn/lib/jets3t-0.9.0.jar:/usr/lib/hadoop-yarn/lib/woodstox-core-5.0.3.jar:/usr/lib/hadoop-yarn/lib/jcip-annotations-1.0-1.jar:/usr/lib/hadoop-yarn/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/avro-1.7.7.jar:/usr/lib/hadoop-yarn/lib/fst-2.50.jar:/usr/lib/hadoop-yarn/lib/jetty-sslengine-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/ehcache-3.3.1.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.19.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-router-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-router.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib
/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.1.7.3.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/lib/avro-1.7.7.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.19.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//httpcore-4.4.11.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-applicationhistoryservice-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archive-logs.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-2.6.7.jar:/usr/lib/hadoop-mapreduce/.//java-util-1.9.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/lib/hadoop-mapreduce/.//jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-resourcemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.5.9.jar:/usr/lib/hadoop-mapreduce/.//azure-keyvault-core-0.8.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-registry.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//leveldbjni-all-1.8.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.54.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-core-3.4.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aliyun.jar:/usr/lib/hadoop-mapreduce/.//spotbugs-annotations-3.1.9.jar:/usr/lib/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-m20/12/18 19:09:31 INFO namenode.NameNode: STARTUP_MSG: \u001B[0m\n", "\u001B[34m/************************************************************\u001B[0m\n", "\u001B[34mSTARTUP_MSG: Starting NameNode\u001B[0m\n", "\u001B[34mSTARTUP_MSG: host = algo-1/10.0.68.168\u001B[0m\n", "\u001B[34mSTARTUP_MSG: args = []\u001B[0m\n", "\u001B[34mSTARTUP_MSG: version = 2.10.0-amzn-0\u001B[0m\n", "\u001B[34mSTARTUP_MSG: classpath = 
/usr/lib/hadoop/etc/hadoop:/usr/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/lib/httpcore-4.4.11.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop/lib/httpclient-4.5.9.jar:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/lib/jsch-0.1.54.jar:/usr/lib/hadoop/lib/spotbugs-annotations-3.1.9.jar:/usr/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/zookeeper-3.4.14.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/lib/hadoop/lib/jersey-core-1.9.jar:/usr/lib/hadoop/lib/audience-annotations-0.5.0.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/commons-lang-2.6.jar:/usr/lib/hadoop/lib/commons-lang3-3.4.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/jersey-json-1.9.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/json-smart-1.3.1.jar:/usr/lib/hadoop/lib/curator-client-2.7.1.jar:/usr/lib/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop/lib/stax2-api-3.1.4.jar:/usr/lib/hadoop/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop/lib/commons-beanutils-1.9.4.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/snappy-java-1.1.7.3.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop/lib/jersey-server-1.9.jar:/usr/lib/hadoop/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop/lib/gson-2.2.4.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/lib/woodstox-core-5.0.3.jar:/usr/lib/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/lib/hadoop/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop/lib/slf4j-api-1.7.25.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/avro-1.7.7.jar:/usr/lib/hadoop/lib/jetty-sslengine-6.1.26-emr.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/lib/hadoop/lib/jsr305-3.0.0.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/junit-4.11.jar:/usr/lib/hadoop/lib/commons-compress-1.19.jar:/usr/lib/hadoop/.//hadoop-streaming-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-extras-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-applicationhistoryservice-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-archive-logs.jar:/usr/lib/hadoop/.//hadoop-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-rumen.jar:/usr/lib/hadoop/.//hadoop-yarn-server-resourcemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop/.//hadoop-yarn-registry.jar:/usr/lib/hadoop/.//hadoop-nfs.jar:/usr/lib/hadoop/.//hadoop-aliyun.jar:/usr/lib/hadoop/.//hadoop-ant-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-gridmix.jar:/usr/lib/hadoop/./
/hadoop-resourceestimator-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop/.//hadoop-aws.jar:/usr/lib/hadoop/.//hadoop-archives.jar:/usr/lib/hadoop/.//hadoop-azure-datalake.jar:/usr/lib/hadoop/.//hadoop-yarn-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-api.jar:/usr/lib/hadoop/.//hadoop-archives-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-distcp.jar:/usr/lib/hadoop/.//hadoop-azure-datalake-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-openstack-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-aws-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-common-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-azure.jar:/usr/lib/hadoop/.//hadoop-nfs-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop/.//hadoop-rumen-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-distcp-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-ant.jar:/usr/lib/hadoop/.//hadoop-sls.jar:/usr/lib/hadoop/.//hadoop-azure-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-registry-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-archive-logs-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-auth-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop/.//hadoop-extras.jar:/usr/lib/hadoop/.//hadoop-annotations-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//hadoop-datajoin.jar:/usr/lib/hadoop/.//hadoop-aliyun-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-openstack.jar:/usr/lib/hadoop/.//hadoop-common-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop/.//hadoop-resourceestimator.jar:/usr/lib/hadoop/.//hadoop-datajoin-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-yarn-server-web-proxy-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-gridmix-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-yarn-api-2.10.0-amzn-0.jar:/usr/lib/hadoop/.//hadoop-streaming.jar:/usr/lib/hadoop/.//hadoop-yarn-common.jar:/usr/lib/hadoop/.//hadoop-sls-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/jackson-core-2.6.7.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/xml-apis-1.4.01.jar:/usr/lib/hadoop-hdfs/lib/jackson-databind-2.6.7.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/xercesImpl-2.12.0.jar:/usr/lib/hadoop-hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop-hdfs/lib/jackson-annotations-2.6.7.jar:/usr/lib/hadoop-hdfs/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-hdfs/lib/okhttp-2.7.5.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-native-client-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-rbf
.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-client-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-rbf-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-native-client.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-native-client-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-client-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-client.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-rbf-2.10.0-amzn-0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-nfs-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop-yarn/lib/httpcore-4.4.11.jar:/usr/lib/hadoop-yarn/lib/java-util-1.9.0.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/paranamer-2.3.jar:/usr/lib/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-yarn/lib/httpclient-4.5.9.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-yarn/lib/jsch-0.1.54.jar:/usr/lib/hadoop-yarn/lib/spotbugs-annotations-3.1.9.jar:/usr/lib/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.14.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/nimbus-jose-jwt-4.41.1.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/audience-annotations-0.5.0.jar:/usr/lib/hadoop-yarn/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/commons-lang3-3.4.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/HikariCP-java7-2.4.12.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/commons-net-3.1.jar:/usr/lib/hadoop-yarn/lib/json-smart-1.3.1.jar:/usr/lib/hadoop-yarn/lib/curator-client-2.7.1.jar:/usr/lib/hadoop-yarn/lib/metrics-core-3.0.1.jar:/usr/lib/hadoop-yarn/lib/htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop-yarn/lib/stax2-api-3.1.4.jar:/usr/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/commons-beanutils-1.9.4.jar:/usr/lib/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/lib/hadoop-yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/lib/hadoop-yarn/lib/json-io-2.5.1.jar:/usr/lib/hadoop-yarn/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop-yarn/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/snappy-java-1.1.7.3.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/us
r/lib/hadoop-yarn/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/gson-2.2.4.jar:/usr/lib/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/lib/hadoop-yarn/lib/jets3t-0.9.0.jar:/usr/lib/hadoop-yarn/lib/woodstox-core-5.0.3.jar:/usr/lib/hadoop-yarn/lib/jcip-annotations-1.0-1.jar:/usr/lib/hadoop-yarn/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/avro-1.7.7.jar:/usr/lib/hadoop-yarn/lib/fst-2.50.jar:/usr/lib/hadoop-yarn/lib/jetty-sslengine-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/ehcache-3.3.1.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.19.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-router-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-router.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.1.7.3.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/li
b/hadoop-mapreduce/lib/avro-1.7.7.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.11.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/commons-compress-1.19.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//httpcore-4.4.11.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-applicationhistoryservice-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archive-logs.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-2.6.7.jar:/usr/lib/hadoop-mapreduce/.//java-util-1.9.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/lib/hadoop-mapreduce/.//jersey-guice-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-resourcemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.5.9.jar:/usr/lib/hadoop-mapreduce/.//azure-keyvault-core-0.8.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-registry.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//leveldbjni-all-1.8.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.54.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-core-3.4.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aliyun.jar:/usr/lib/hadoop-mapreduce/.//spotbugs-annotations-3.1.9.jar:/usr/lib/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-sts-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2\u001B[0m\n", 
"\u001B[34m.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-resourceestimator-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-ecs-4.2.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-mapreduce/.//azure-storage-5.4.0.jar:/usr/lib/hadoop-mapreduce/.//zookeeper-3.4.14.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aws.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives.jar:/usr/lib/hadoop-mapreduce/.//nimbus-jose-jwt-4.41.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-datalake.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-api.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/.//jackson-databind-2.6.7.jar:/usr/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//audience-annotations-0.5.0.jar:/usr/lib/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/.//jersey-client-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-datalake-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-lang3-3.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-openstack-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//HikariCP-java7-2.4.12.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aws-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure.jar:/usr/lib/hadoop-mapreduce/.//geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/lib/hadoop-mapreduce/.//aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//json-smart-1.3.1.jar:/usr/lib/hadoop-mapreduce/.//aws-java-sdk-bundle-1.11.852.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-mapreduce/.//aliyun-sdk-oss-3.4.1.jar:/usr/lib/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/lib/hadoop-mapreduce/.//htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop-mapreduce/.//stax2-api-3.1.4.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.2.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-1.9.4.jar:/usr/lib/hadoop-mapreduce/.//azure-data-lake-store-sdk-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//mssql-jdbc-6.2.1.jre7.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jackson-annotations-2.6.7.jar:/usr/lib/hadoop-mapreduce/.//json-io-2.5.1.jar:/usr/lib/hadoop-mapreduce/.//netty-3.10.6.Final.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/
lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/.//jdom-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-registry-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archive-logs-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.1.7.3.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//ojalgo-43.0.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//woodstox-core-5.0.3.jar:/usr/lib/hadoop-mapreduce/.//jcip-annotations-1.0-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras.jar:/usr/lib/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-ram-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aliyun-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//avro-1.7.7.jar:/usr/lib/hadoop-mapreduce/.//fst-2.50.jar:/usr/lib/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/.//jetty-sslengine-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/.//javax.inject-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-resourceestimator.jar:/usr/lib/hadoop-mapreduce/.//ehcache-3.3.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-web-proxy-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/.//asm-3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//guice-3.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-api-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.19.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-common.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.10.0-amzn-0.jar\u001B[0m\n", "\u001B[34mSTARTUP_MSG: build = git@aws157git.com:/pkg/Aws157BigTop -r 
d1e860a34cc1aea3d600c57c5c0270ea41579e8c; compiled by 'ec2-user' on 2020-09-19T02:05Z\u001B[0m\n", "\u001B[34mSTARTUP_MSG: java = 1.8.0_265\u001B[0m\n", "\u001B[34m************************************************************/\u001B[0m\n", "\u001B[34mapreduce/.//commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-sts-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-resourceestimator-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-ecs-4.2.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.10.0-amzn-0-tests.jar:/usr/lib/hadoop-mapreduce/.//azure-storage-5.4.0.jar:/usr/lib/hadoop-mapreduce/.//zookeeper-3.4.14.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aws.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives.jar:/usr/lib/hadoop-mapreduce/.//nimbus-jose-jwt-4.41.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-datalake.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-api.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/lib/hadoop-mapreduce/.//jackson-databind-2.6.7.jar:/usr/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//audience-annotations-0.5.0.jar:/usr/lib/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/lib/hadoop-mapreduce/.//jersey-client-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-datalake-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-lang3-3.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-openstack-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//HikariCP-java7-2.4.12.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aws-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure.jar:/usr/lib/hadoop-mapreduce/.//geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/lib/hadoop-mapreduce/.//aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//json-smart-1.3.1.jar:/usr/lib/hadoop-mapreduce/.//aws-java-sdk-bundle-1.11.852.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-mapreduce/.//aliyun-sdk-oss-3.4.1.jar:/usr/lib/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/lib/hadoop-mapreduce/.//htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop-mapreduce/.//stax2-api-3.1.4.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.2.jar:/usr/lib/hadoop
-mapreduce/.//commons-beanutils-1.9.4.jar:/usr/lib/hadoop-mapreduce/.//azure-data-lake-store-sdk-2.2.3.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//mssql-jdbc-6.2.1.jre7.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jackson-annotations-2.6.7.jar:/usr/lib/hadoop-mapreduce/.//json-io-2.5.1.jar:/usr/lib/hadoop-mapreduce/.//netty-3.10.6.Final.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//hadoop-ant.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-mapreduce/.//jdom-1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-registry-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archive-logs-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.1.7.3.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//ojalgo-43.0.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//woodstox-core-5.0.3.jar:/usr/lib/hadoop-mapreduce/.//jcip-annotations-1.0-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras.jar:/usr/lib/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/.//aliyun-java-sdk-ram-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aliyun-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//avro-1.7.7.jar:/usr/lib/hadoop-mapreduce/.//fst-2.50.jar:/usr/lib/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/lib/hadoop-mapreduce/.//jetty-sslengine-6.1.26-emr.jar:/usr/lib/hadoop-mapreduce/.//javax.inject-1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-resourceestimator.jar:/usr/lib/hadoop-mapreduce/.//ehcache-3.3.1.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-server-web-proxy-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduc
e/.//asm-3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//guice-3.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-api-2.10.0-amzn-0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.19.jar:/usr/lib/hadoop-mapreduce/.//hadoop-yarn-common.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-router-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-registry-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-router.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/lib/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop-yarn/lib/httpcore-4.4.11.jar:/usr/lib/hadoop-yarn/lib/java-util-1.9.0.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/paranamer-2.3.jar:/usr/lib/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-yarn/lib/httpclient-4.5.9.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-yarn/lib/jsch-0.1.54.jar:/usr/lib/hadoop-yarn/lib/spotbugs-annotations-3.1.9.jar:/usr/lib/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.14.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/nimbus-jose-jwt-4.41.1.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/audience-annotations-0.5.0.jar:/usr/lib/hadoop-yarn/lib/apached
s-i18n-2.0.0-M15.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/commons-lang3-3.4.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/HikariCP-java7-2.4.12.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/commons-net-3.1.jar:/usr/lib/hadoop-yarn/lib/json-smart-1.3.1.jar:/usr/lib/hadoop-yarn/lib/curator-client-2.7.1.jar:/usr/lib/hadoop-yarn/lib/metrics-core-3.0.1.jar:/usr/lib/hadoop-yarn/lib/htrace-core4-4.1.0-incubating.jar:/usr/lib/hadoop-yarn/lib/stax2-api-3.1.4.jar:/usr/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/lib/hadoop-yarn/lib/commons-beanutils-1.9.4.jar:/usr/lib/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/lib/hadoop-yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/lib/hadoop-yarn/lib/json-io-2.5.1.jar:/usr/lib/hadoop-yarn/lib/netty-3.10.6.Final.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.9.13.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/lib/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/lib/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop-yarn/lib/curator-framework-2.7.1.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/snappy-java-1.1.7.3.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/gson-2.2.4.jar:/usr/lib/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/lib/hadoop-yarn/lib/jets3t-0.9.0.jar:/usr/lib/hadoop-yarn/lib/woodstox-core-5.0.3.jar:/usr/lib/hadoop-yarn/lib/jcip-annotations-1.0-1.jar:/usr/lib/hadoop-yarn/lib/curator-recipes-2.7.1.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/avro-1.7.7.jar:/usr/lib/hadoop-yarn/lib/fst-2.50.jar:/usr/lib/hadoop-yarn/lib/jetty-sslengine-6.1.26-emr.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/ehcache-3.3.1.jar:/usr/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/lib/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.19.jar:/usr/lib/hadoop/etc/hadoop/nm-config/log4j.properties:/usr/lib/hadoop-yarn/.//timelineservice/hadoop-yarn-server-timelineservice-hbase-coprocessor-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//timelineservice/hadoop-yarn-server-timelineservice-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//timelineservice/hadoop-yarn-server-timelineservice-hbase-client-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//timelineservice/hadoop-yarn-server-timelineservice-hbase-common-2.10.0-amzn-0.jar:/usr/lib/hadoop-yarn/.//timelineservice/lib/jcodings-1.0.8.jar:/usr/lib/hadoop-yarn/.//timelineservice/lib/metrics-core-2.2.0.jar:/usr/lib/hadoop-yarn/.//timelineservice/lib/jackson-core-2.6.7.jar:/usr/lib/hadoop-yarn/.//timelineservice/lib/commons-csv-1.0.jar:/usr/lib/hadoop-yarn/.//timelineservice/lib/htrace-core-3.1.0-incubating.jar:/usr/lib/hadoop-yarn/.//timelineservice/lib/netty-all-4.0.23.Final.jar:/usr/lib/hadoop-yarn/.//timelineservice/lib/hbase-protocol-1.2.6.jar:/usr/lib/hadoop-yarn/.//timelineservice/lib/hbase-common-1.2.6.jar:/usr/lib/h
adoop-yarn/.//timelineservice/lib/joni-2.1.2.jar:/usr/lib/hadoop-yarn/.//timelineservice/lib/jsr311-api-1.1.1.jar:/usr/lib/hadoop-yarn/.//timelineservice/lib/hbase-annotations-1.2.6.jar:/usr/lib/hadoop-yarn/.//timelineservice/lib/hbase-client-1.2.6.jar\u001B[0m\n", "\u001B[34mSTARTUP_MSG: build = git@aws157git.com:/pkg/Aws157BigTop -r d1e860a34cc1aea3d600c57c5c0270ea41579e8c; compiled by 'ec2-user' on 2020-09-19T02:05Z\u001B[0m\n", "\u001B[34mSTARTUP_MSG: java = 1.8.0_265\u001B[0m\n", "\u001B[34m************************************************************/\u001B[0m\n", "\u001B[34m20/12/18 19:09:31 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]\u001B[0m\n", "\u001B[34m20/12/18 19:09:31 INFO nodemanager.NodeManager: registered UNIX signal handlers for [TERM, HUP, INT]\u001B[0m\n", "\u001B[34m20/12/18 19:09:32 INFO namenode.NameNode: createNameNode []\u001B[0m\n", "\u001B[34m20/12/18 19:09:32 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties\u001B[0m\n", "\u001B[34m20/12/18 19:09:32 INFO conf.Configuration: found resource core-site.xml at file:/etc/hadoop/conf.empty/core-site.xml\u001B[0m\n", "\u001B[34m20/12/18 19:09:32 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).\u001B[0m\n", "\u001B[34m20/12/18 19:09:32 INFO impl.MetricsSystemImpl: NameNode metrics system started\u001B[0m\n", "\u001B[34m20/12/18 19:09:32 INFO security.Groups: clearing userToGroupsMap cache\u001B[0m\n", "\u001B[34m20/12/18 19:09:32 INFO namenode.NameNode: fs.defaultFS is hdfs://10.0.68.168/\u001B[0m\n", "\u001B[34m20/12/18 19:09:32 INFO conf.Configuration: resource-types.xml not found\u001B[0m\n", "\u001B[34m20/12/18 19:09:32 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/opt/amazon/hadoop/hdfs/datanode/\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO conf.Configuration: found resource yarn-site.xml at file:/etc/hadoop/conf.empty/yarn-site.xml\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO security.NMTokenSecretManagerInRM: NMTokenKeyRollingInterval: 86400000ms and NMTokenKeyActivationDelay: 900000ms\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO security.RMContainerTokenSecretManager: ContainerTokenKeyRollingInterval: 86400000ms and ContainerTokenKeyActivationDelay: 900000ms\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO security.AMRMTokenSecretManager: AMRMTokenKeyRollingInterval: 86400000ms and AMRMTokenKeyActivationDelay: 900000 ms\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO util.JvmPauseMonitor: Starting JVM pause monitor\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO nodemanager.NodeManager: Node Manager health check script is not available or doesn't have execute permission, so not starting the node health script runner.\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO 
hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStoreEventType for class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.NodesListManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.NodesListManager\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO resourcemanager.ResourceManager: Using Scheduler: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$LocalizationEventHandlerWrapper\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerSchedulerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler.ContainerScheduler\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.ContainerManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.NodeManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.NodeManager\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType for class org.apache.hadoop.yarn.event.EventDispatcher\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$NodeEventDispatcher\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO impl.MetricsSystemImpl: DataNode metrics system started\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO impl.MetricsSystemImpl: NodeManager metrics system started\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO nodemanager.DirectoryCollection: Disk Validator: yarn.nodemanager.disk-validator is loaded.\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO nodemanager.DirectoryCollection: Disk Validator: yarn.nodemanager.disk-validator is loaded.\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO impl.MetricsSystemImpl: ResourceManager metrics system started\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO security.YarnAuthorizationProvider: 
org.apache.hadoop.yarn.security.ConfiguredYarnAuthorizer is instantiated.\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMAppManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.RMAppManager\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncherEventType for class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO resourcemanager.RMNMInfo: Registered RMNMInfo MBean\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO nodemanager.NodeResourceMonitorImpl: Using ResourceCalculatorPlugin : org.apache.hadoop.yarn.util.ResourceCalculatorPlugin@5149d738\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO monitor.RMAppLifetimeMonitor: Application lifelime monitor interval set to 3000 ms.\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO containermanager.ContainerManagerImpl: AMRMProxyService is disabled\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO localizer.ResourceLocalizationService: per directory file limit = 8192\u001B[0m\n", "\u001B[34m20/12/18 19:09:33 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Jetty bound to port 50070\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO mortbay.log: jetty-6.1.26-emr\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO conf.Configuration: found resource capacity-scheduler.xml at file:/etc/hadoop/conf.empty/capacity-scheduler.xml\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO localizer.ResourceLocalizationService: Disk Validator: yarn.nodemanager.disk-validator is loaded.\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO scheduler.AbstractYarnScheduler: Minimum allocation = \u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO scheduler.AbstractYarnScheduler: Maximum allocation = \u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. 
Disabling file IO profiling\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO monitor.ContainersMonitorImpl: Using ResourceCalculatorPlugin : org.apache.hadoop.yarn.util.ResourceCalculatorPlugin@5e2c3d18\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO monitor.ContainersMonitorImpl: Using ResourceCalculatorProcessTree : null\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO monitor.ContainersMonitorImpl: Physical memory check enabled: true\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO monitor.ContainersMonitorImpl: Virtual memory check enabled: true\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO monitor.ContainersMonitorImpl: ContainersMonitor enabled: true\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 WARN monitor.ContainersMonitorImpl: NodeManager configured with 7.3 G physical memory allocated to containers, which is more than 80% of the total physical memory available (7.3 G). Thrashing might happen.\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO containermanager.ContainerManagerImpl: Not a recoverable state store. Nothing to recover.\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO datanode.DataNode: Configured hostname is algo-1\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 WARN conf.Configuration: No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO datanode.DataNode: Starting DataNode with maxLockedMemory = 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO conf.Configuration: resource-types.xml not found\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO conf.Configuration: node-resources.xml not found\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resource.ResourceUtils: Unable to find 'node-resources.xml'.\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root is undefined\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root is undefined\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO datanode.DataNode: Opened streaming server at /0.0.0.0:50010\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO nodemanager.NodeStatusUpdaterImpl: Nodemanager resources is set to: \u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO datanode.DataNode: Balancing bandwidth is 10485760 bytes/s\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 
INFO datanode.DataNode: Number threads for balancing is 50\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO nodemanager.NodeStatusUpdaterImpl: Initialized nodemanager with : physical-memory=7449 virtual-memory=37245 virtual-cores=4\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.ParentQueue: root, capacity=1.0, absoluteCapacity=1.0, maxCapacity=1.0, absoluteMaxCapacity=1.0, state=RUNNING, acls=SUBMIT_APP:*ADMINISTER_QUEUE:*, labels=*,\u001B[0m\n", "\u001B[34m, reservationsContinueLooking=true, orderingPolicy=utilization, priority=0\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.ParentQueue: Initialized parent-queue root name=root, fullname=root\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root.default is undefined\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root.default is undefined\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.LeafQueue: Initializing default\u001B[0m\n", "\u001B[34mcapacity = 1.0 [= (float) configuredCapacity / 100 ]\u001B[0m\n", "\u001B[34mabsoluteCapacity = 1.0 [= parentAbsoluteCapacity * capacity ]\u001B[0m\n", "\u001B[34mmaxCapacity = 1.0 [= configuredMaxCapacity ]\u001B[0m\n", "\u001B[34mabsoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]\u001B[0m\n", "\u001B[34muserLimit = 100 [= configuredUserLimit ]\u001B[0m\n", "\u001B[34muserLimitFactor = 1.0 [= configuredUserLimitFactor ]\u001B[0m\n", "\u001B[34mmaxApplications = 10000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]\u001B[0m\n", "\u001B[34mmaxApplicationsPerUser = 10000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]\u001B[0m\n", "\u001B[34musedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]\u001B[0m\n", "\u001B[34mabsoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]\u001B[0m\n", "\u001B[34mmaxAMResourcePerQueuePercent = 0.1 [= configuredMaximumAMResourcePercent ]\u001B[0m\n", "\u001B[34mminimumAllocationFactor = 0.9998658 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]\u001B[0m\n", "\u001B[34mmaximumAllocation = [= configuredMaxAllocation ]\u001B[0m\n", "\u001B[34mnumContainers = 0 [= currentNumContainers ]\u001B[0m\n", "\u001B[34mstate = RUNNING [= configuredState ]\u001B[0m\n", "\u001B[34macls = SUBMIT_APP:*ADMINISTER_QUEUE:* [= configuredAcls ]\u001B[0m\n", "\u001B[34mnodeLocalityDelay = 40\u001B[0m\n", "\u001B[34mrackLocalityAdditionalDelay = -1\u001B[0m\n", "\u001B[34mlabels=*,\u001B[0m\n", "\u001B[34mreservationsContinueLooking = true\u001B[0m\n", "\u001B[34mpreemptionDisabled = true\u001B[0m\n", "\u001B[34mdefaultAppPriorityPerQueue = 0\u001B[0m\n", "\u001B[34mpriority = 0\u001B[0m\n", "\u001B[34mmaxLifetime = -1 seconds\u001B[0m\n", "\u001B[34mdefaultLifetime = -1 seconds\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.CapacitySchedulerQueueManager: Initialized queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, 
numContainers=0\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.CapacitySchedulerQueueManager: Initialized queue: root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=usedCapacity=0.0, numApps=0, numContainers=0\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.CapacitySchedulerQueueManager: Initialized root queue root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=usedCapacity=0.0, numApps=0, numContainers=0\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO placement.UserGroupMappingPlacementRule: Initialized queue mappings, override: false\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.WorkflowPriorityMappingsManager: Initialized workflow priority mappings, override: false\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO capacity.CapacityScheduler: Initialized CapacityScheduler with calculator=class org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, minimumAllocation=<>, maximumAllocation=<>, asynchronousScheduling=false, asyncScheduleInterval=5ms\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO conf.Configuration: dynamic-resources.xml not found\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resourcemanager.AMSProcessingChain: Initializing AMS Processing chain. Root Processor=[org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor].\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 2000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO resourcemanager.ResourceManager: TimelineServicePublisher is not configured\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO ipc.Server: Starting Socket Reader #1 for port 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpRequestLog: Http request log for http.requests.datanode is not defined\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Jetty bound to port 45629\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO mortbay.log: jetty-6.1.26-emr\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpRequestLog: Http request log for http.requests.resourcemanager is not defined\u001B[0m\n", 
"\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context cluster\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context static\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context logs\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context cluster\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: adding path spec: /cluster/*\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO http.HttpServer2: adding path spec: /ws/*\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ContainerManagementProtocolPB to the server\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO ipc.Server: IPC Server Responder: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO ipc.Server: IPC Server listener on 0: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:34 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO security.NMContainerTokenSecretManager: Updating node address : algo-1:44075\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 500 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO ipc.Server: Starting Socket Reader #1 for port 8040\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB to the server\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO ipc.Server: IPC Server Responder: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO ipc.Server: IPC Server listener on 8040: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO localizer.ResourceLocalizationService: Localizer started on port 8040\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO containermanager.ContainerManagerImpl: ContainerManager started at /10.0.68.168:44075\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO containermanager.ContainerManagerImpl: ContainerManager bound to algo-1/10.0.68.168:0\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO webapp.WebServer: Instantiating NMWebApp at algo-1:8042\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. 
Beware of data loss due to lack of redundant storage directories!\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO webapp.WebApps: Registered webapp guice modules\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45629\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO http.HttpServer2: Jetty bound to port 8088\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO mortbay.log: jetty-6.1.26-emr\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSEditLog: Edit logging is async:true\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSNamesystem: KeyProvider: null\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSNamesystem: fsLock is fair: true\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSNamesystem: supergroup = supergroup\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSNamesystem: isPermissionEnabled = true\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSNamesystem: HA Enabled: false\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO http.HttpRequestLog: Http request log for http.requests.nodemanager is not defined\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. 
Disabling file IO profiling\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context node\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO http.HttpServer2: Added filter authentication (class=org.apache.hadoop.security.AuthenticationWithProxyUserFilter) to context node\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO http.HttpServer2: Added filter authentication (class=org.apache.hadoop.security.AuthenticationWithProxyUserFilter) to context logs\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO http.HttpServer2: Added filter authentication (class=org.apache.hadoop.security.AuthenticationWithProxyUserFilter) to context static\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO http.HttpServer2: adding path spec: /node/*\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO http.HttpServer2: adding path spec: /ws/*\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManager: The block deletion will start around 2020 Dec 18 19:09:35\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: Computing capacity for map BlocksMap\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: VM type = 64-bit\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: capacity = 2^21 = 2097152 entries\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:50075\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.JvmPauseMonitor: Starting JVM pause monitor\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO datanode.DataNode: dnUserName = root\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO datanode.DataNode: supergroup = supergroup\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000\u001B[0m\n", 
"\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManager: defaultReplication = 3\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManager: maxReplication = 512\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManager: minReplication = 1\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManager: maxReplicationStreams = 2\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManager: encryptDataTransfer = false\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSNamesystem: Append Enabled: true\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO mortbay.log: Extract jar:file:/usr/lib/hadoop/hadoop-yarn-common-2.10.0-amzn-0.jar!/webapps/cluster to work/Jetty_10_0_68_168_8088_cluster____qx6lwx/webapp\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: Computing capacity for map INodeMap\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: VM type = 64-bit\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: capacity = 2^20 = 1048576 entries\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSDirectory: ACLs enabled? false\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSDirectory: XAttrs enabled? true\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.NameNode: Caching file names occurring more than 10 times\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: Computing capacity for map cachedBlocks\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: VM type = 64-bit\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: capacity = 2^18 = 262144 entries\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSNamesystem: Retry cache on namenode is enabled\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: Computing capacity for map NameNodeRetryCache\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: VM type = 64-bit\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO util.GSet: capacity = 2^15 = 32768 entries\u001B[0m\n", "\u001B[34m20/12/18 19:09:35 INFO ipc.Server: Starting Socket Reader #1 
for port 50020\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO common.Storage: Lock on /opt/amazon/hadoop/hdfs/namenode/in_use.lock acquired by nodename 96@algo-1\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.FileJournalManager: Recovering unfinalized segments in /opt/amazon/hadoop/hdfs/namenode/current\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.FSImage: No edit log streams selected.\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.FSImage: Planning to load image: FSImageFile(file=/opt/amazon/hadoop/hdfs/namenode/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.FSImageFormatPBINode: Loading 1 INodes.\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO datanode.DataNode: Opened IPC server at /0.0.0.0:50020\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.FSImageFormatPBINode: Successfully loaded 1 inodes\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO datanode.DataNode: Refresh request received for nameservices: null\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.FSImage: Loaded image for txid 0 from /opt/amazon/hadoop/hdfs/namenode/current/fsimage_0000000000000000000\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO datanode.DataNode: Starting BPOfferServices for nameservices: \u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.FSEditLog: Starting log segment at 1\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO datanode.DataNode: Block pool (Datanode Uuid unassigned) service to algo-1/10.0.68.168:8020 starting to offer service\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO ipc.Server: IPC Server Responder: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO ipc.Server: IPC Server listener on 50020: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.NameCache: initialized with 0 entries 0 lookups\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.FSNamesystem: Finished loading FSImage in 489 msecs\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO webapp.WebApps: Registered webapp guice modules\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO http.HttpServer2: Jetty bound to port 8042\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO mortbay.log: jetty-6.1.26-emr\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:36 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register\u001B[0m\n", "\u001B[34mINFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:36 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register\u001B[0m\n", "\u001B[34mINFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:36 
PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register\u001B[0m\n", "\u001B[34mINFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:36 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate\u001B[0m\n", "\u001B[34mINFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO mortbay.log: Extract jar:file:/usr/lib/hadoop/hadoop-yarn-common-2.10.0-amzn-0.jar!/webapps/node to work/Jetty_algo.1_8042_node____.afclh/webapp\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.NameNode: RPC server is binding to algo-1:8020\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO namenode.NameNode: Enable NameNode state context:false\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:36 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider\u001B[0m\n", "\u001B[34mINFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope \"Singleton\"\u001B[0m\n", "\u001B[34m20/12/18 19:09:36 INFO ipc.Server: Starting Socket Reader #1 for port 8020\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO namenode.NameNode: Clients are to use algo-1:8020 to access this namenode/service.\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO namenode.LeaseManager: Number of blocks under construction: 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO blockmanagement.BlockManager: initializing replication queues\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO blockmanagement.BlockManager: Total number of blocks = 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO blockmanagement.BlockManager: Number of invalid blocks = 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO blockmanagement.BlockManager: Number of over-replicated blocks = 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO blockmanagement.BlockManager: Number of blocks being written = 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 20 msec\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO ipc.Server: IPC Server listener on 8020: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO ipc.Server: IPC Server Responder: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO namenode.NameNode: NameNode RPC up at: algo-1/10.0.68.168:8020\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO namenode.FSNamesystem: Starting services required for active state\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO namenode.FSDirectory: Initializing quota with 4 thread(s)\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO namenode.FSDirectory: Quota initialization completed in 
21 milliseconds\u001B[0m\n", "\u001B[34mname space=1\u001B[0m\n", "\u001B[34mstorage space=0\u001B[0m\n", "\u001B[34mstorage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:37 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register\u001B[0m\n", "\u001B[34mINFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:37 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register\u001B[0m\n", "\u001B[34mINFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:37 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register\u001B[0m\n", "\u001B[34mINFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:37 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate\u001B[0m\n", "\u001B[34mINFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:37 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider\u001B[0m\n", "\u001B[34mINFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope \"Singleton\"\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:37 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider\u001B[0m\n", "\u001B[34mINFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope \"Singleton\"\u001B[0m\n", "\u001B[34m20/12/18 19:09:37 INFO ipc.Client: Retrying connect to server: algo-1/10.0.68.168:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:38 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider\u001B[0m\n", "\u001B[34mINFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope \"Singleton\"\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO datanode.DataNode: Acknowledging ACTIVE Namenode during handshakeBlock pool (Datanode Uuid unassigned) service to algo-1/10.0.68.168:8020\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO common.Storage: Lock on /opt/amazon/hadoop/hdfs/datanode/in_use.lock acquired by nodename 97@algo-1\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO common.Storage: Storage directory /opt/amazon/hadoop/hdfs/datanode is not formatted for namespace 1269051150. 
Formatting...\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO common.Storage: Generated new storageID DS-2746dc6c-e557-448f-afb2-f0c0bbae037b for directory /opt/amazon/hadoop/hdfs/datanode\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO common.Storage: Analyzing storage directories for bpid BP-1734687384-10.0.68.168-1608318570285\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO common.Storage: Locking is disabled for /opt/amazon/hadoop/hdfs/datanode/current/BP-1734687384-10.0.68.168-1608318570285\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO common.Storage: Block pool storage directory /opt/amazon/hadoop/hdfs/datanode/current/BP-1734687384-10.0.68.168-1608318570285 is not formatted for BP-1734687384-10.0.68.168-1608318570285. Formatting ...\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO common.Storage: Formatting block pool BP-1734687384-10.0.68.168-1608318570285 directory /opt/amazon/hadoop/hdfs/datanode/current/BP-1734687384-10.0.68.168-1608318570285/current\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO datanode.DataNode: Setting up storage: nsid=1269051150;bpid=BP-1734687384-10.0.68.168-1608318570285;lv=-57;nsInfo=lv=-63;cid=CID-e5ed05aa-d91e-4ee1-bc45-a4102c300e93;nsid=1269051150;c=1608318570285;bpid=BP-1734687384-10.0.68.168-1608318570285;dnuuid=null\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO datanode.DataNode: Generated and persisted new Datanode UUID 69471637-357e-45f6-ac4b-214c9a805580\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:38 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider\u001B[0m\n", "\u001B[34mINFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices to GuiceManagedComponentProvider with the scope \"Singleton\"\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO impl.FsDatasetImpl: Added new volume: DS-2746dc6c-e557-448f-afb2-f0c0bbae037b\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO impl.FsDatasetImpl: Added volume - /opt/amazon/hadoop/hdfs/datanode/current, StorageType: DISK\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO impl.FsDatasetImpl: Registered FSDatasetState MBean\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@10.0.68.168:8088\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO webapp.WebApps: Web app cluster started at 8088\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO checker.ThrottledAsyncChecker: Scheduling a check for /opt/amazon/hadoop/hdfs/datanode/current\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO checker.DatasetVolumeChecker: Scheduled health check for volume /opt/amazon/hadoop/hdfs/datanode/current\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO impl.FsDatasetImpl: Adding block pool BP-1734687384-10.0.68.168-1608318570285\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO impl.FsDatasetImpl: Scanning block pool BP-1734687384-10.0.68.168-1608318570285 on volume /opt/amazon/hadoop/hdfs/datanode/current...\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 100 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-1734687384-10.0.68.168-1608318570285 on /opt/amazon/hadoop/hdfs/datanode/current: 106ms\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1734687384-10.0.68.168-1608318570285: 126ms\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO ipc.Server: Starting Socket 
Reader #1 for port 8033\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-1734687384-10.0.68.168-1608318570285 on volume /opt/amazon/hadoop/hdfs/datanode/current...\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO impl.BlockPoolSlice: Replica Cache file: /opt/amazon/hadoop/hdfs/datanode/current/BP-1734687384-10.0.68.168-1608318570285/current/replicas doesn't exist \u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1734687384-10.0.68.168-1608318570285 on volume /opt/amazon/hadoop/hdfs/datanode/current: 54ms\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO impl.FsDatasetImpl: Total time to add all replicas to map for block pool BP-1734687384-10.0.68.168-1608318570285: 63ms\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO datanode.VolumeScanner: Now scanning bpid BP-1734687384-10.0.68.168-1608318570285 on volume /opt/amazon/hadoop/hdfs/datanode\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO datanode.VolumeScanner: VolumeScanner(/opt/amazon/hadoop/hdfs/datanode, DS-2746dc6c-e557-448f-afb2-f0c0bbae037b): finished scanning block pool BP-1734687384-10.0.68.168-1608318570285\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 12/19/20 12:26 AM with interval of 21600000ms\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO datanode.DataNode: Block pool BP-1734687384-10.0.68.168-1608318570285 (Datanode Uuid 69471637-357e-45f6-ac4b-214c9a805580) service to algo-1/10.0.68.168:8020 beginning handshake with NN\u001B[0m\n", "\u001B[34m20/12/18 19:09:38 INFO datanode.VolumeScanner: VolumeScanner(/opt/amazon/hadoop/hdfs/datanode, DS-2746dc6c-e557-448f-afb2-f0c0bbae037b): no suitable block pools found to scan. 
Waiting 1814399888 ms.\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(10.0.68.168:50010, datanodeUuid=69471637-357e-45f6-ac4b-214c9a805580, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-e5ed05aa-d91e-4ee1-bc45-a4102c300e93;nsid=1269051150;c=1608318570285) storage 69471637-357e-45f6-ac4b-214c9a805580\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO net.NetworkTopology: Adding a new node: /default-rack/10.0.68.168:50010\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO blockmanagement.BlockReportLeaseManager: Registered DN 69471637-357e-45f6-ac4b-214c9a805580 (10.0.68.168:50010).\u001B[0m\n", "\u001B[34mDec 18, 2020 7:09:39 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider\u001B[0m\n", "\u001B[34mINFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope \"Singleton\"\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO datanode.DataNode: Block pool Block pool BP-1734687384-10.0.68.168-1608318570285 (Datanode Uuid 69471637-357e-45f6-ac4b-214c9a805580) service to algo-1/10.0.68.168:8020 successfully registered with NN\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO datanode.DataNode: For namenode algo-1/10.0.68.168:8020 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@algo-1:8042\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO webapp.WebApps: Web app node started at 8042\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocolPB to the server\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-2746dc6c-e557-448f-afb2-f0c0bbae037b for DN 10.0.68.168:50010\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO nodemanager.NodeStatusUpdaterImpl: Node ID assigned is : algo-1:44075\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO util.JvmPauseMonitor: Starting JVM pause monitor\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO client.RMProxy: Connecting to ResourceManager at /10.0.68.168:8031\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO ipc.Server: IPC Server listener on 8033: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO ipc.Server: IPC Server Responder: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO resourcemanager.ResourceManager: Transitioning to active state\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO recovery.RMStateStore: Updating AMRMToken\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO security.RMContainerTokenSecretManager: Rolling master-key for container-tokens\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO security.NMTokenSecretManagerInRM: Rolling master-key for nm-tokens\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO security.RMDelegationTokenSecretManager: storing master key with keyID 1\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO recovery.RMStateStore: Storing RMDTMasterKey.\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)\u001B[0m\n", 
"\u001B[34m20/12/18 19:09:39 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO security.RMDelegationTokenSecretManager: storing master key with keyID 2\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO recovery.RMStateStore: Storing RMDTMasterKey.\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO nodemanager.NodeStatusUpdaterImpl: Sending out 0 NM container statuses: []\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO BlockStateChange: BLOCK* processReport 0xf0bb2149978cb4f1: Processing first storage report for DS-2746dc6c-e557-448f-afb2-f0c0bbae037b from datanode 69471637-357e-45f6-ac4b-214c9a805580\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO BlockStateChange: BLOCK* processReport 0xf0bb2149978cb4f1: from storage DS-2746dc6c-e557-448f-afb2-f0c0bbae037b node DatanodeRegistration(10.0.68.168:50010, datanodeUuid=69471637-357e-45f6-ac4b-214c9a805580, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-e5ed05aa-d91e-4ee1-bc45-a4102c300e93;nsid=1269051150;c=1608318570285), blocks: 0, hasStaleStorage: false, processing time: 6 msecs, invalidatedBlocks: 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO nodemanager.NodeStatusUpdaterImpl: Registering with RM using containers :[]\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.nodelabels.event.NodeLabelsStoreEventType for class org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager$ForwardingEventHandler\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 5000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO ipc.Server: Starting Socket Reader #1 for port 8031\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceTrackerPB to the server\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO datanode.DataNode: Successfully sent block report 0xf0bb2149978cb4f1, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 24 msec to generate and 188 msecs for RPC and NN processing. 
Got back one command: FinalizeCommand/5.\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO datanode.DataNode: Got finalize command for block pool BP-1734687384-10.0.68.168-1608318570285\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO ipc.Server: IPC Server Responder: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO ipc.Server: IPC Server listener on 8031: starting\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO cluster is up\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO transitioning from status BOOTSTRAPPING to WAITING\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO starting executor logs watcher\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO start log event log publisher\u001B[0m\n", "\u001B[34mStarting executor logs watcher on log_dir: /var/log/yarn\u001B[0m\n", "\u001B[34m12-18 19:09 sagemaker-spark-event-logs-publisher INFO Spark event log not enabled.\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO Waiting for hosts to bootstrap: ['algo-1']\u001B[0m\n", "\u001B[34m12-18 19:09 smspark-submit INFO Received host statuses: dict_items([('algo-1', StatusMessage(status='WAITING', timestamp='2020-12-18T19:09:39.738423'))])\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO util.JvmPauseMonitor: Starting JVM pause monitor\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 5000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO ipc.Server: Starting Socket Reader #1 for port 8030\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO ipc.Server: IPC Server Responder: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:39 INFO ipc.Server: IPC Server listener on 8030: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 5000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler\u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO ipc.Server: Starting Socket Reader #1 for port 8032\u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationClientProtocolPB to the server\u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO ipc.Server: IPC Server Responder: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO ipc.Server: IPC Server listener on 8032: starting\u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO resourcemanager.ResourceTrackerService: NodeManager from node algo-1(cmPort: 44075 httpPort: 8042) registered with capability: , assigned nodeId algo-1:44075\u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO rmnode.RMNodeImpl: algo-1:44075 Node Transitioned from NEW to RUNNING\u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO resourcemanager.ResourceManager: Transitioned to active state\u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id -1415079653\u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO security.NMTokenSecretManagerInNM: Rolling master-key for container-tokens, got key with id -1457059190\u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as algo-1:44075 with total resource of \u001B[0m\n", "\u001B[34m20/12/18 19:09:40 INFO 
capacity.CapacityScheduler: Added node algo-1:44075 clusterResource: \u001B[0m\n", "\u001B[34m20/12/18 19:09:42 INFO spark.SparkContext: Running Spark version 2.4.6-amzn-0\u001B[0m\n", "\u001B[34m20/12/18 19:09:42 INFO spark.SparkContext: Submitted application: PySparkApp\u001B[0m\n", "\u001B[34m20/12/18 19:09:42 INFO spark.SecurityManager: Changing view acls to: root\u001B[0m\n", "\u001B[34m20/12/18 19:09:42 INFO spark.SecurityManager: Changing modify acls to: root\u001B[0m\n", "\u001B[34m20/12/18 19:09:42 INFO spark.SecurityManager: Changing view acls groups to: \u001B[0m\n", "\u001B[34m20/12/18 19:09:42 INFO spark.SecurityManager: Changing modify acls groups to: \u001B[0m\n", "\u001B[34m20/12/18 19:09:42 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO util.Utils: Successfully started service 'sparkDriver' on port 35617.\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO spark.SparkEnv: Registering MapOutputTracker\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO spark.SparkEnv: Registering BlockManagerMaster\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-c91acdce-dba0-4824-83f1-ed4b5a716f03\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO memory.MemoryStore: MemoryStore started with capacity 1028.8 MB\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO spark.SparkEnv: Registering OutputCommitCoordinator\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO util.log: Logging initialized @3449ms\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO server.Server: jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO server.Server: Started @3572ms\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO server.AbstractConnector: Started ServerConnector@611e2320{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2cdd7488{/jobs,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7a8d16d1{/jobs/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2189b987{/jobs/job,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@691f7022{/jobs/job/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7bb6d84e{/stages,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@179f033e{/stages/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@284ecee4{/stages/stage,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 
19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@18929523{/stages/stage/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@46f5513b{/stages/pool,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6db1eb07{/stages/pool/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@45129b46{/storage,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4925254a{/storage/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2535d9dc{/storage/rdd,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@473ee7ae{/storage/rdd/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@16279ca4{/environment,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7648e45f{/environment/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7d382873{/executors,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4d0ba3ed{/executors/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@67e3c454{/executors/threadDump,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@56b8aeeb{/executors/threadDump/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6aff7fff{/static,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@67affd9d{/,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@34e7a3c9{/api,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@77d247c5{/jobs/job/kill,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@10e64cea{/stages/stage/kill,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:09:43 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.0.68.168:4040\u001B[0m\n", "\u001B[34m20/12/18 19:09:44 INFO client.RMProxy: Connecting to ResourceManager at /10.0.68.168:8032\u001B[0m\n", "\u001B[34m20/12/18 19:09:44 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers\u001B[0m\n", "\u001B[34m20/12/18 19:09:44 INFO resourcemanager.ClientRMService: Allocated new applicationId: 1\u001B[0m\n", "\u001B[34m20/12/18 19:09:44 INFO conf.Configuration: resource-types.xml not found\u001B[0m\n", "\u001B[34m20/12/18 19:09:44 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.\u001B[0m\n", "\u001B[34m20/12/18 19:09:45 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, 
units = Mi, type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:45 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:09:45 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (7449 MB per container)\u001B[0m\n", "\u001B[34m20/12/18 19:09:45 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead\u001B[0m\n", "\u001B[34m20/12/18 19:09:45 INFO yarn.Client: Setting up container launch context for our AM\u001B[0m\n", "\u001B[34m20/12/18 19:09:45 INFO yarn.Client: Setting up the launch environment for our AM container\u001B[0m\n", "\u001B[34m20/12/18 19:09:45 INFO yarn.Client: Preparing resources for our AM container\u001B[0m\n", "\u001B[34m20/12/18 19:09:45 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.\u001B[0m\n", "\u001B[34m20/12/18 19:09:49 INFO yarn.Client: Uploading resource file:/tmp/spark-34ee3167-47a5-40d6-b567-33245a1ea730/__spark_libs__6740149115475763856.zip -> hdfs://10.0.68.168/user/root/.sparkStaging/application_1608318579307_0001/__spark_libs__6740149115475763856.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:52 INFO hdfs.StateChange: BLOCK* allocate blk_1073741825_1001, replicas=10.0.68.168:50010 for /user/root/.sparkStaging/application_1608318579307_0001/__spark_libs__6740149115475763856.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:53 INFO datanode.DataNode: Receiving BP-1734687384-10.0.68.168-1608318570285:blk_1073741825_1001 src: /10.0.68.168:35458 dest: /10.0.68.168:50010\u001B[0m\n", "\u001B[34m20/12/18 19:09:54 INFO DataNode.clienttrace: src: /10.0.68.168:35458, dest: /10.0.68.168:50010, bytes: 134217728, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-605213754_18, offset: 0, srvID: 69471637-357e-45f6-ac4b-214c9a805580, blockid: BP-1734687384-10.0.68.168-1608318570285:blk_1073741825_1001, duration(ns): 572898557\u001B[0m\n", "\u001B[34m20/12/18 19:09:54 INFO datanode.DataNode: PacketResponder: BP-1734687384-10.0.68.168-1608318570285:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating\u001B[0m\n", "\u001B[34m20/12/18 19:09:54 INFO hdfs.StateChange: BLOCK* allocate blk_1073741826_1002, replicas=10.0.68.168:50010 for /user/root/.sparkStaging/application_1608318579307_0001/__spark_libs__6740149115475763856.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:54 INFO datanode.DataNode: Receiving BP-1734687384-10.0.68.168-1608318570285:blk_1073741826_1002 src: /10.0.68.168:35460 dest: /10.0.68.168:50010\u001B[0m\n", "\u001B[34m20/12/18 19:09:54 INFO DataNode.clienttrace: src: /10.0.68.168:35460, dest: /10.0.68.168:50010, bytes: 134217728, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-605213754_18, offset: 0, srvID: 69471637-357e-45f6-ac4b-214c9a805580, blockid: BP-1734687384-10.0.68.168-1608318570285:blk_1073741826_1002, duration(ns): 410909340\u001B[0m\n", "\u001B[34m20/12/18 19:09:54 INFO datanode.DataNode: PacketResponder: BP-1734687384-10.0.68.168-1608318570285:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating\u001B[0m\n", "\u001B[34m20/12/18 19:09:54 INFO hdfs.StateChange: BLOCK* allocate blk_1073741827_1003, replicas=10.0.68.168:50010 for /user/root/.sparkStaging/application_1608318579307_0001/__spark_libs__6740149115475763856.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:54 INFO datanode.DataNode: Receiving BP-1734687384-10.0.68.168-1608318570285:blk_1073741827_1003 src: /10.0.68.168:35462 dest: 
/10.0.68.168:50010\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO DataNode.clienttrace: src: /10.0.68.168:35462, dest: /10.0.68.168:50010, bytes: 134217728, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-605213754_18, offset: 0, srvID: 69471637-357e-45f6-ac4b-214c9a805580, blockid: BP-1734687384-10.0.68.168-1608318570285:blk_1073741827_1003, duration(ns): 335466740\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO datanode.DataNode: PacketResponder: BP-1734687384-10.0.68.168-1608318570285:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO hdfs.StateChange: BLOCK* allocate blk_1073741828_1004, replicas=10.0.68.168:50010 for /user/root/.sparkStaging/application_1608318579307_0001/__spark_libs__6740149115475763856.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO datanode.DataNode: Receiving BP-1734687384-10.0.68.168-1608318570285:blk_1073741828_1004 src: /10.0.68.168:35464 dest: /10.0.68.168:50010\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO DataNode.clienttrace: src: /10.0.68.168:35464, dest: /10.0.68.168:50010, bytes: 13532432, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-605213754_18, offset: 0, srvID: 69471637-357e-45f6-ac4b-214c9a805580, blockid: BP-1734687384-10.0.68.168-1608318570285:blk_1073741828_1004, duration(ns): 29487097\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO datanode.DataNode: PacketResponder: BP-1734687384-10.0.68.168-1608318570285:blk_1073741828_1004, type=LAST_IN_PIPELINE terminating\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO hdfs.StateChange: DIR* completeFile: /user/root/.sparkStaging/application_1608318579307_0001/__spark_libs__6740149115475763856.zip is closed by DFSClient_NONMAPREDUCE_-605213754_18\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO yarn.Client: Uploading resource file:/usr/lib/spark/python/lib/pyspark.zip -> hdfs://10.0.68.168/user/root/.sparkStaging/application_1608318579307_0001/pyspark.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO hdfs.StateChange: BLOCK* allocate blk_1073741829_1005, replicas=10.0.68.168:50010 for /user/root/.sparkStaging/application_1608318579307_0001/pyspark.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO datanode.DataNode: Receiving BP-1734687384-10.0.68.168-1608318570285:blk_1073741829_1005 src: /10.0.68.168:35468 dest: /10.0.68.168:50010\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO DataNode.clienttrace: src: /10.0.68.168:35468, dest: /10.0.68.168:50010, bytes: 596339, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-605213754_18, offset: 0, srvID: 69471637-357e-45f6-ac4b-214c9a805580, blockid: BP-1734687384-10.0.68.168-1608318570285:blk_1073741829_1005, duration(ns): 2419047\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO datanode.DataNode: PacketResponder: BP-1734687384-10.0.68.168-1608318570285:blk_1073741829_1005, type=LAST_IN_PIPELINE terminating\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO namenode.FSNamesystem: BLOCK* blk_1073741829_1005 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/root/.sparkStaging/application_1608318579307_0001/pyspark.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO hdfs.StateChange: DIR* completeFile: /user/root/.sparkStaging/application_1608318579307_0001/pyspark.zip is closed by DFSClient_NONMAPREDUCE_-605213754_18\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO yarn.Client: Uploading resource file:/usr/lib/spark/python/lib/py4j-0.10.7-src.zip -> hdfs://10.0.68.168/user/root/.sparkStaging/application_1608318579307_0001/py4j-0.10.7-src.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO hdfs.StateChange: 
BLOCK* allocate blk_1073741830_1006, replicas=10.0.68.168:50010 for /user/root/.sparkStaging/application_1608318579307_0001/py4j-0.10.7-src.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO datanode.DataNode: Receiving BP-1734687384-10.0.68.168-1608318570285:blk_1073741830_1006 src: /10.0.68.168:35470 dest: /10.0.68.168:50010\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO DataNode.clienttrace: src: /10.0.68.168:35470, dest: /10.0.68.168:50010, bytes: 42437, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-605213754_18, offset: 0, srvID: 69471637-357e-45f6-ac4b-214c9a805580, blockid: BP-1734687384-10.0.68.168-1608318570285:blk_1073741830_1006, duration(ns): 1192932\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO datanode.DataNode: PacketResponder: BP-1734687384-10.0.68.168-1608318570285:blk_1073741830_1006, type=LAST_IN_PIPELINE terminating\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO hdfs.StateChange: DIR* completeFile: /user/root/.sparkStaging/application_1608318579307_0001/py4j-0.10.7-src.zip is closed by DFSClient_NONMAPREDUCE_-605213754_18\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO yarn.Client: Uploading resource file:/tmp/spark-34ee3167-47a5-40d6-b567-33245a1ea730/__spark_conf__363147793911745605.zip -> hdfs://10.0.68.168/user/root/.sparkStaging/application_1608318579307_0001/__spark_conf__.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO hdfs.StateChange: BLOCK* allocate blk_1073741831_1007, replicas=10.0.68.168:50010 for /user/root/.sparkStaging/application_1608318579307_0001/__spark_conf__.zip\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO datanode.DataNode: Receiving BP-1734687384-10.0.68.168-1608318570285:blk_1073741831_1007 src: /10.0.68.168:35472 dest: /10.0.68.168:50010\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO DataNode.clienttrace: src: /10.0.68.168:35472, dest: /10.0.68.168:50010, bytes: 245104, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-605213754_18, offset: 0, srvID: 69471637-357e-45f6-ac4b-214c9a805580, blockid: BP-1734687384-10.0.68.168-1608318570285:blk_1073741831_1007, duration(ns): 1548496\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO datanode.DataNode: PacketResponder: BP-1734687384-10.0.68.168-1608318570285:blk_1073741831_1007, type=LAST_IN_PIPELINE terminating\u001B[0m\n", "\u001B[34m20/12/18 19:09:55 INFO hdfs.StateChange: DIR* completeFile: /user/root/.sparkStaging/application_1608318579307_0001/__spark_conf__.zip is closed by DFSClient_NONMAPREDUCE_-605213754_18\u001B[0m\n", "\u001B[34m20/12/18 19:09:56 INFO spark.SecurityManager: Changing view acls to: root\u001B[0m\n", "\u001B[34m20/12/18 19:09:56 INFO spark.SecurityManager: Changing modify acls to: root\u001B[0m\n", "\u001B[34m20/12/18 19:09:56 INFO spark.SecurityManager: Changing view acls groups to: \u001B[0m\n", "\u001B[34m20/12/18 19:09:56 INFO spark.SecurityManager: Changing modify acls groups to: \u001B[0m\n", "\u001B[34m20/12/18 19:09:56 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO yarn.Client: Submitting application application_1608318579307_0001 to ResourceManager\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO capacity.CapacityScheduler: Application 'application_1608318579307_0001' is submitted without priority hence considering default queue/cluster priority: 0\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO capacity.CapacityScheduler: 
Priority '0' is acceptable in queue : default for application: application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 WARN rmapp.RMAppImpl: The specific max attempts: 0 for application: 1 is invalid, because it is out of the range [1, 1]. Use the global max attempts instead.\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO resourcemanager.ClientRMService: Application with id 1 submitted by user root\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO rmapp.RMAppImpl: Storing application with id application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO resourcemanager.RMAuditLogger: USER=root#011IP=10.0.68.168#011OPERATION=Submit Application Request#011TARGET=ClientRMService#011RESULT=SUCCESS#011APPID=application_1608318579307_0001#011QUEUENAME=default\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO recovery.RMStateStore: Storing info for app: application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO rmapp.RMAppImpl: application_1608318579307_0001 State change from NEW to NEW_SAVING on event = START\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO rmapp.RMAppImpl: application_1608318579307_0001 State change from NEW_SAVING to SUBMITTED on event = APP_NEW_SAVED\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO capacity.ParentQueue: Application added - appId: application_1608318579307_0001 user: root leaf-queue of parent: root #applications: 1\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO capacity.CapacityScheduler: Accepted application application_1608318579307_0001 from user: root, in queue: default\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO rmapp.RMAppImpl: application_1608318579307_0001 State change from SUBMITTED to ACCEPTED on event = APP_ACCEPTED\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1608318579307_0001_000001\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO attempt.RMAppAttemptImpl: appattempt_1608318579307_0001_000001 State change from NEW to SUBMITTED on event = START\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 WARN capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 WARN capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. 
skipping enforcement to allow at least one application to start\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO capacity.LeafQueue: Application application_1608318579307_0001 from user: root activated in queue: default\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO capacity.LeafQueue: Application added - appId: application_1608318579307_0001 user: root, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO capacity.CapacityScheduler: Added Application Attempt appattempt_1608318579307_0001_000001 to scheduler from user root in queue default\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO attempt.RMAppAttemptImpl: appattempt_1608318579307_0001_000001 State change from SUBMITTED to SCHEDULED on event = ATTEMPT_ADDED\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO impl.YarnClientImpl: Submitted application application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:09:57 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1608318579307_0001 and attemptId None\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1608318579307_0001_000001 container=null queue=default clusterResource= type=OFF_SWITCH requestedPartition=\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used= cluster=\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO rmcontainer.RMContainerImpl: container_1608318579307_0001_01_000001 Container Transitioned from NEW to ALLOCATED\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO resourcemanager.RMAuditLogger: USER=root#011OPERATION=AM Allocated Container#011TARGET=SchedulerApp#011RESULT=SUCCESS#011APPID=application_1608318579307_0001#011CONTAINERID=container_1608318579307_0001_01_000001#011RESOURCE=#011QUEUENAME=default\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : algo-1:44075 for container : container_1608318579307_0001_01_000001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO rmcontainer.RMContainerImpl: container_1608318579307_0001_01_000001 Container Transitioned from ALLOCATED to ACQUIRED\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO security.NMTokenSecretManagerInRM: Clear node set for appattempt_1608318579307_0001_000001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.1202846 absoluteUsedCapacity=0.1202846 used= cluster=\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO capacity.CapacityScheduler: Allocation proposal accepted\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1608318579307_0001 AttemptId: appattempt_1608318579307_0001_000001 MasterContainer: Container: [ContainerId: container_1608318579307_0001_01_000001, AllocationRequestId: 0, Version: 0, NodeId: algo-1:44075, NodeHttpAddress: algo-1:8042, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.0.68.168:44075 }, ExecutionType: GUARANTEED, ]\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO attempt.RMAppAttemptImpl: appattempt_1608318579307_0001_000001 State change from SCHEDULED to ALLOCATED_SAVING on event = CONTAINER_ALLOCATED\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO attempt.RMAppAttemptImpl: appattempt_1608318579307_0001_000001 State change from ALLOCATED_SAVING 
to ALLOCATED on event = ATTEMPT_NEW_SAVED\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO amlauncher.AMLauncher: Launching masterappattempt_1608318579307_0001_000001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1608318579307_0001_01_000001, AllocationRequestId: 0, Version: 0, NodeId: algo-1:44075, NodeHttpAddress: algo-1:8042, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.0.68.168:44075 }, ExecutionType: GUARANTEED, ] for AM appattempt_1608318579307_0001_000001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1608318579307_0001_000001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO security.AMRMTokenSecretManager: Creating password for appattempt_1608318579307_0001_000001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO ipc.Server: Auth successful for appattempt_1608318579307_0001_000001 (auth:SIMPLE)\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO yarn.Client: Application report for application_1608318579307_0001 (state: ACCEPTED)\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO yarn.Client: \u001B[0m\n", "\u001B[34m#011 client token: N/A\u001B[0m\n", "\u001B[34m#011 diagnostics: [Fri Dec 18 19:09:58 +0000 2020] Scheduler has assigned a container for AM, waiting for AM container to be launched\u001B[0m\n", "\u001B[34m#011 ApplicationMaster host: N/A\u001B[0m\n", "\u001B[34m#011 ApplicationMaster RPC port: -1\u001B[0m\n", "\u001B[34m#011 queue: default\u001B[0m\n", "\u001B[34m#011 start time: 1608318597512\u001B[0m\n", "\u001B[34m#011 final status: UNDEFINED\u001B[0m\n", "\u001B[34m#011 tracking URL: http://algo-1:8088/proxy/application_1608318579307_0001/\u001B[0m\n", "\u001B[34m#011 user: root\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO containermanager.ContainerManagerImpl: Start request for container_1608318579307_0001_01_000001 by user root\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO containermanager.ContainerManagerImpl: Creating a new application reference for app application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO application.ApplicationImpl: Application application_1608318579307_0001 transitioned from NEW to INITING\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO nodemanager.NMAuditLogger: USER=root#011IP=10.0.68.168#011OPERATION=Start Container Request#011TARGET=ContainerManageImpl#011RESULT=SUCCESS#011APPID=application_1608318579307_0001#011CONTAINERID=container_1608318579307_0001_01_000001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO application.ApplicationImpl: Adding container_1608318579307_0001_01_000001 to application application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO application.ApplicationImpl: Application application_1608318579307_0001 transitioned from INITING to RUNNING\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO container.ContainerImpl: Container container_1608318579307_0001_01_000001 transitioned from NEW to LOCALIZING\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1608318579307_0001_01_000001, AllocationRequestId: 0, Version: 0, NodeId: algo-1:44075, NodeHttpAddress: algo-1:8042, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.0.68.168:44075 }, ExecutionType: GUARANTEED, ] 
for AM appattempt_1608318579307_0001_000001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO attempt.RMAppAttemptImpl: appattempt_1608318579307_0001_000001 State change from ALLOCATED to LAUNCHED on event = LAUNCHED\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO rmapp.RMAppImpl: update the launch time for applicationId: application_1608318579307_0001, attemptId: appattempt_1608318579307_0001_000001launchTime: 1608318598903\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO recovery.RMStateStore: Updating info for app: application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO localizer.ResourceLocalizationService: Created localizer for container_1608318579307_0001_01_000001\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-root/nm-local-dir/nmPrivate/container_1608318579307_0001_01_000001.tokens\u001B[0m\n", "\u001B[34m20/12/18 19:09:58 INFO nodemanager.DefaultContainerExecutor: Initializing user root\u001B[0m\n", "\u001B[34m20/12/18 19:09:59 INFO nodemanager.DefaultContainerExecutor: Copying from /tmp/hadoop-root/nm-local-dir/nmPrivate/container_1608318579307_0001_01_000001.tokens to /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1608318579307_0001/container_1608318579307_0001_01_000001.tokens\u001B[0m\n", "\u001B[34m20/12/18 19:09:59 INFO nodemanager.DefaultContainerExecutor: Localizer CWD set to /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1608318579307_0001 = file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:09:59 INFO localizer.ContainerLocalizer: Disk Validator: yarn.nodemanager.disk-validator is loaded.\u001B[0m\n", "\u001B[34m20/12/18 19:09:59 INFO rmcontainer.RMContainerImpl: container_1608318579307_0001_01_000001 Container Transitioned from ACQUIRED to RUNNING\u001B[0m\n", "\u001B[34m20/12/18 19:09:59 INFO yarn.Client: Application report for application_1608318579307_0001 (state: ACCEPTED)\u001B[0m\n", "\u001B[34m20/12/18 19:10:00 INFO yarn.Client: Application report for application_1608318579307_0001 (state: ACCEPTED)\u001B[0m\n", "\u001B[34m20/12/18 19:10:06 INFO util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2446ms\u001B[0m\n", "\u001B[34mNo GCs detected\u001B[0m\n", "\u001B[34m20/12/18 19:10:06 INFO yarn.Client: Application report for application_1608318579307_0001 (state: ACCEPTED)\u001B[0m\n", "\u001B[34m20/12/18 19:10:07 INFO yarn.Client: Application report for application_1608318579307_0001 (state: ACCEPTED)\u001B[0m\n", "\u001B[34m20/12/18 19:10:08 INFO yarn.Client: Application report for application_1608318579307_0001 (state: ACCEPTED)\u001B[0m\n", "\u001B[34m20/12/18 19:10:09 INFO yarn.Client: Application report for application_1608318579307_0001 (state: ACCEPTED)\u001B[0m\n", "\u001B[34m20/12/18 19:10:09 INFO container.ContainerImpl: Container container_1608318579307_0001_01_000001 transitioned from LOCALIZING to SCHEDULED\u001B[0m\n", "\u001B[34m20/12/18 19:10:09 INFO scheduler.ContainerScheduler: Starting container [container_1608318579307_0001_01_000001]\u001B[0m\n", "\u001B[34m20/12/18 19:10:09 INFO container.ContainerImpl: Container container_1608318579307_0001_01_000001 transitioned from SCHEDULED to RUNNING\u001B[0m\n", "\u001B[34m20/12/18 19:10:09 INFO monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1608318579307_0001_01_000001\u001B[0m\n", "\u001B[34m20/12/18 19:10:09 INFO 
nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1608318579307_0001/container_1608318579307_0001_01_000001/default_container_executor.sh]\u001B[0m\n", "\u001B[34mHandling create event for file: /var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/prelaunch.out\u001B[0m\n", "\u001B[34mHandling create event for file: /var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/prelaunch.err\u001B[0m\n", "\u001B[34mHandling create event for file: /var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stdout\u001B[0m\n", "\u001B[34mHandling create event for file: /var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr\u001B[0m\n", "\u001B[34m20/12/18 19:10:10 INFO yarn.Client: Application report for application_1608318579307_0001 (state: ACCEPTED)\u001B[0m\n", "\u001B[34m20/12/18 19:10:11 INFO yarn.Client: Application report for application_1608318579307_0001 (state: ACCEPTED)\u001B[0m\n", "\u001B[34m20/12/18 19:10:11 INFO monitor.ContainersMonitorImpl: container_1608318579307_0001_01_000001's ip = 10.0.68.168, and hostname = algo-1\u001B[0m\n", "\u001B[34m20/12/18 19:10:11 INFO monitor.ContainersMonitorImpl: Skipping monitoring container container_1608318579307_0001_01_000001 since CPU usage is not yet available.\u001B[0m\n", "\u001B[34m20/12/18 19:10:12 INFO yarn.Client: Application report for application_1608318579307_0001 (state: ACCEPTED)\u001B[0m\n", "\u001B[34m20/12/18 19:10:12 INFO ipc.Server: Auth successful for appattempt_1608318579307_0001_000001 (auth:SIMPLE)\u001B[0m\n", "\u001B[34m20/12/18 19:10:12 INFO resourcemanager.DefaultAMSProcessor: AM registration appattempt_1608318579307_0001_000001\u001B[0m\n", "\u001B[34m20/12/18 19:10:12 INFO resourcemanager.RMAuditLogger: USER=root#011IP=10.0.68.168#011OPERATION=Register App Master#011TARGET=ApplicationMasterService#011RESULT=SUCCESS#011APPID=application_1608318579307_0001#011APPATTEMPTID=appattempt_1608318579307_0001_000001\u001B[0m\n", "\u001B[34m20/12/18 19:10:12 INFO attempt.RMAppAttemptImpl: appattempt_1608318579307_0001_000001 State change from LAUNCHED to RUNNING on event = REGISTERED\u001B[0m\n", "\u001B[34m20/12/18 19:10:12 INFO rmapp.RMAppImpl: application_1608318579307_0001 State change from ACCEPTED to RUNNING on event = ATTEMPT_REGISTERED\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> algo-1, PROXY_URI_BASES -> http://algo-1:8088/proxy/application_1608318579307_0001), /proxy/application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO yarn.Client: Application report for application_1608318579307_0001 (state: RUNNING)\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO yarn.Client: \u001B[0m\n", "\u001B[34m#011 client token: N/A\u001B[0m\n", "\u001B[34m#011 diagnostics: N/A\u001B[0m\n", "\u001B[34m#011 ApplicationMaster host: 10.0.68.168\u001B[0m\n", "\u001B[34m#011 ApplicationMaster RPC port: -1\u001B[0m\n", "\u001B[34m#011 queue: default\u001B[0m\n", "\u001B[34m#011 start time: 1608318597512\u001B[0m\n", "\u001B[34m#011 final status: UNDEFINED\u001B[0m\n", "\u001B[34m#011 tracking URL: http://algo-1:8088/proxy/application_1608318579307_0001/\u001B[0m\n", "\u001B[34m#011 user: root\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO cluster.YarnClientSchedulerBackend: Application application_1608318579307_0001 has started running.\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1608318579307_0001_000001 container=null queue=default clusterResource= type=OFF_SWITCH requestedPartition=\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.1202846 absoluteUsedCapacity=0.1202846 used= cluster=\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO rmcontainer.RMContainerImpl: container_1608318579307_0001_01_000002 Container Transitioned from NEW to ALLOCATED\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO resourcemanager.RMAuditLogger: USER=root#011OPERATION=AM Allocated Container#011TARGET=SchedulerApp#011RESULT=SUCCESS#011APPID=application_1608318579307_0001#011CONTAINERID=container_1608318579307_0001_01_000002#011RESOURCE=#011QUEUENAME=default\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.8178279 absoluteUsedCapacity=0.8178279 used= cluster=\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO capacity.CapacityScheduler: Allocation proposal accepted\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37447.\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO netty.NettyBlockTransferService: Server created on 10.0.68.168:37447\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.68.168, 37447, None)\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.0.68.168:37447 with 1028.8 MB RAM, BlockManagerId(driver, 10.0.68.168, 37447, None)\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.68.168, 37447, None)\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.68.168, 37447, None)\u001B[0m\n", 
"\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:10 INFO util.SignalUtils: Registered signal handler for TERM\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:10 INFO util.SignalUtils: Registered signal handler for HUP\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:10 INFO util.SignalUtils: Registered signal handler for INT\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:10 INFO spark.SecurityManager: Changing view acls to: root\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:10 INFO spark.SecurityManager: Changing modify acls to: root\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:10 INFO spark.SecurityManager: Changing view acls groups to: \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:10 INFO spark.SecurityManager: Changing modify acls groups to: \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:10 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:11 INFO yarn.ApplicationMaster: Preparing Local resources\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:12 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1608318579307_0001_000001\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:12 INFO client.RMProxy: Connecting to ResourceManager at /10.0.68.168:8030\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:12 INFO yarn.YarnRMClient: Registering the ApplicationMaster\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:12 INFO client.TransportClientFactory: Successfully created connection to /10.0.68.168:35617 after 132 ms (0 ms spent in bootstraps)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:13 INFO yarn.ApplicationMaster: \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] ===============================================================================\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] YARN executor launch context:\u001B[0m\n", 
"\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] env:\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] CLASSPATH -> /usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/goodies/lib/emr-spark-goodies.jar:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar{{PWD}}{{PWD}}/__spark_conf__{{PWD}}/__spark_libs__/*$HADOOP_CONF_DIR$HADOOP_COMMON_HOME/share/hadoop/common/*$HADOOP_COMMON_HOME/share/hadoop/common/lib/*$HADOOP_HDFS_HOME/share/hadoop/hdfs/*$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*$HADOOP_YARN_HOME/share/hadoop/yarn/*$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*{{PWD}}/__spark_conf__/__hadoop_conf__\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] SPARK_YARN_STAGING_DIR -> hdfs://10.0.68.168/user/root/.sparkStaging/application_1608318579307_0001\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] SPARK_NO_DAEMONIZE -> TRUE\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] SPARK_USER -> root\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] SPARK_MASTER_HOST -> 10.0.68.168\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] SPARK_HOME -> /usr/lib/spark\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] PYTHONPATH -> {{PWD}}/pyspark.zip{{PWD}}/py4j-0.10.7-src.zip\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] command:\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] LD_LIBRARY_PATH=\\\"/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:$LD_LIBRARY_PATH\\\" \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] {{JAVA_HOME}}/bin/java \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] -server \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] -Xmx4724m \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] '-verbose:gc' \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] '-XX:OnOutOfMemoryError=kill -9 %p' \\ \u001B[0m\n", 
"\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] '-XX:+PrintGCDetails' \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] '-XX:+PrintGCDateStamps' \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] '-XX:+UseParallelGC' \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] '-XX:InitiatingHeapOccupancyPercent=70' \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] '-XX:ConcGCThreads=1' \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] '-XX:ParallelGCThreads=3' \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] -Djava.io.tmpdir={{PWD}}/tmp \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] '-Dspark.driver.port=35617' \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] -Dspark.yarn.app.container.log.dir= \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] org.apache.spark.executor.CoarseGrainedExecutorBackend \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] --driver-url \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] spark://CoarseGrainedScheduler@10.0.68.168:35617 \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] --executor-id \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] --hostname \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] --cores \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 4 \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] --app-id \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] application_1608318579307_0001 \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] --user-class-path \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] file:$PWD/__app__.jar \\ \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 1>/stdout \\ \u001B[0m\n", 
"\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 2>/stderr\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] resources:\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] pyspark.zip -> resource { scheme: \"hdfs\" host: \"10.0.68.168\" port: -1 file: \"/user/root/.sparkStaging/application_1608318579307_0001/pyspark.zip\" } size: 596339 timestamp: 1608318595764 type: FILE visibility: PRIVATE\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] py4j-0.10.7-src.zip -> resource { scheme: \"hdfs\" host: \"10.0.68.168\" port: -1 file: \"/user/root/.sparkStaging/application_1608318579307_0001/py4j-0.10.7-src.zip\" } size: 42437 timestamp: 1608318595789 type: FILE visibility: PRIVATE\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] __spark_libs__ -> resource { scheme: \"hdfs\" host: \"10.0.68.168\" port: -1 file: \"/user/root/.sparkStaging/application_1608318579307_0001/__spark_libs__6740149115475763856.zip\" } size: 416185616 timestamp: 1608318595232 type: ARCHIVE visibility: PRIVATE\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] __spark_conf__ -> resource { scheme: \"hdfs\" host: \"10.0.68.168\" port: -1 file: \"/user/root/.sparkStaging/application_1608318579307_0001/__spark_conf__.zip\" } size: 245104 timestamp: 1608318595974 type: ARCHIVE visibility: PRIVATE\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] ===============================================================================\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:13 INFO conf.Configuration: resource-types.xml not found\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:13 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:13 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000001/stderr] 20/12/18 19:10:13 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : algo-1:44075 for container : container_1608318579307_0001_01_000002\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO rmcontainer.RMContainerImpl: container_1608318579307_0001_01_000002 Container Transitioned from ALLOCATED to ACQUIRED\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to 
/metrics/json.\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@de69d2e{/metrics/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO ipc.Server: Auth successful for appattempt_1608318579307_0001_000001 (auth:SIMPLE)\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO containermanager.ContainerManagerImpl: Start request for container_1608318579307_0001_01_000002 by user root\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO nodemanager.NMAuditLogger: USER=root#011IP=10.0.68.168#011OPERATION=Start Container Request#011TARGET=ContainerManageImpl#011RESULT=SUCCESS#011APPID=application_1608318579307_0001#011CONTAINERID=container_1608318579307_0001_01_000002\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO application.ApplicationImpl: Adding container_1608318579307_0001_01_000002 to application application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO container.ContainerImpl: Container container_1608318579307_0001_01_000002 transitioned from NEW to LOCALIZING\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO container.ContainerImpl: Container container_1608318579307_0001_01_000002 transitioned from LOCALIZING to SCHEDULED\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO scheduler.ContainerScheduler: Starting container [container_1608318579307_0001_01_000002]\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO container.ContainerImpl: Container container_1608318579307_0001_01_000002 transitioned from SCHEDULED to RUNNING\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1608318579307_0001_01_000002\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1608318579307_0001/container_1608318579307_0001_01_000002/default_container_executor.sh]\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_Handling create event for file: /var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/prelaunch.out\u001B[0m\n", "\u001B[34mHandling create event for file: /var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/prelaunch.err\u001B[0m\n", "\u001B[34mHandling create event for file: /var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout\u001B[0m\n", "\u001B[34mHandling create event for file: /var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO internal.SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/usr/lib/spark/spark-warehouse').\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO internal.SharedState: Warehouse path is 'file:/usr/lib/spark/spark-warehouse'.\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL.\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO handler.ContextHandler: Started 
o.s.j.s.ServletContextHandler@77a17751{/SQL,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/json.\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4ff6d2c1{/SQL/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/execution.\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@69fde785{/SQL/execution,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/execution/json.\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5fc54251{/SQL/execution/json,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:10:13 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /static/sql.\u001B[0m\n", "\u001B[34m20/12/18 19:10:14 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4e0dd962{/static/sql,null,AVAILABLE,@Spark}\u001B[0m\n", "\u001B[34m20/12/18 19:10:14 INFO rmcontainer.RMContainerImpl: container_1608318579307_0001_01_000002 Container Transitioned from ACQUIRED to RUNNING\u001B[0m\n", "\u001B[34m20/12/18 19:10:14 INFO monitor.ContainersMonitorImpl: container_1608318579307_0001_01_000002's ip = 10.0.68.168, and hostname = algo-1\u001B[0m\n", "\u001B[34m20/12/18 19:10:14 INFO monitor.ContainersMonitorImpl: Skipping monitoring container container_1608318579307_0001_01_000002 since CPU usage is not yet available.\u001B[0m\n", "\u001B[34m20/12/18 19:10:14 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint\u001B[0m\n", "\u001B[34m20/12/18 19:10:16 INFO Configuration.deprecation: fs.s3a.server-side-encryption-key is deprecated. 
Instead, use fs.s3a.server-side-encryption.key\u001B[0m\n", "\u001B[34m20/12/18 19:10:16 INFO datasources.InMemoryFileIndex: It took 61 ms to list leaf files for 1 paths.\u001B[0m\n", "\u001B[34m20/12/18 19:10:16 INFO scheduler.AppSchedulingInfo: checking for deactivate of application :application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:10:17 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.0.68.168:50612) with ID 1\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:15 INFO executor.CoarseGrainedExecutorBackend: Started daemon with process name: 1007@algo-1\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:15 INFO util.SignalUtils: Registered signal handler for TERM\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:15 INFO util.SignalUtils: Registered signal handler for HUP\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:15 INFO util.SignalUtils: Registered signal handler for INT\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:15 INFO spark.SecurityManager: Changing view acls to: root\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:15 INFO spark.SecurityManager: Changing modify acls to: root\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:15 INFO spark.SecurityManager: Changing view acls groups to: \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:15 INFO spark.SecurityManager: Changing modify acls groups to: \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:15 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:16 INFO client.TransportClientFactory: Successfully created connection to /10.0.68.168:35617 after 150 ms (0 ms spent in bootstraps)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:16 INFO spark.SecurityManager: Changing view acls to: root\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:16 INFO spark.SecurityManager: Changing modify acls to: root\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:16 INFO spark.SecurityManager: Changing view acls groups to: \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 
20/12/18 19:10:16 INFO spark.SecurityManager: Changing modify acls groups to: \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:16 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:16 INFO client.TransportClientFactory: Successfully created connection to /10.0.68.168:35617 after 3 ms (0 ms spent in bootstraps)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:17 INFO storage.DiskBlockManager: Created local directory at /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1608318579307_0001/blockmgr-961c2be3-e4e4-4505-a0e9-ca2ed58976f4\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:17 INFO memory.MemoryStore: MemoryStore started with capacity 2.3 GB\u001B[0m\n", "\u001B[34m20/12/18 19:10:18 INFO storage.BlockManagerMasterEndpoint: Registering block manager algo-1:38497 with 2.3 GB RAM, BlockManagerId(1, algo-1, 38497, None)\u001B[0m\n", "\u001B[34m20/12/18 19:10:18 INFO datasources.FileSourceStrategy: Pruning directories with: \u001B[0m\n", "\u001B[34m20/12/18 19:10:18 INFO datasources.FileSourceStrategy: Post-Scan Filters: AtLeastNNulls(n, sex#0)\u001B[0m\n", "\u001B[34m20/12/18 19:10:18 INFO datasources.FileSourceStrategy: Output Data Schema: struct\u001B[0m\n", "\u001B[34m20/12/18 19:10:18 INFO execution.FileSourceScanExec: Pushed Filters: \u001B[0m\n", "\u001B[34m20/12/18 19:10:19 INFO codegen.CodeGenerator: Code generated in 403.615359 ms\u001B[0m\n", "\u001B[34m20/12/18 19:10:19 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 303.0 KB, free 1028.6 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:19 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 27.4 KB, free 1028.5 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:19 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.68.168:37447 (size: 27.4 KB, free: 1028.8 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:19 INFO spark.SparkContext: Created broadcast 0 from rdd at StringIndexer.scala:138\u001B[0m\n", "\u001B[34m20/12/18 19:10:19 INFO execution.FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes, number of split files: 1, prefetch: false\u001B[0m\n", "\u001B[34m20/12/18 19:10:19 INFO execution.FileSourceScanExec: relation: None, fileSplitsInPartitionHistogram: ArrayBuffer((1 fileSplits,1))\u001B[0m\n", "\u001B[34m20/12/18 19:10:19 INFO spark.SparkContext: Starting job: countByValue at StringIndexer.scala:140\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO scheduler.DAGScheduler: Registering RDD 7 (countByValue at StringIndexer.scala:140) as input to shuffle 0\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO scheduler.DAGScheduler: Got job 0 (countByValue at StringIndexer.scala:140) with 8 output partitions\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO scheduler.DAGScheduler: Final stage: ResultStage 1 (countByValue at StringIndexer.scala:140)\u001B[0m\n", 
"\u001B[34m20/12/18 19:10:20 INFO scheduler.DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO scheduler.DAGScheduler: Missing parents: List(ShuffleMapStage 0)\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO scheduler.DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[7] at countByValue at StringIndexer.scala:140), which has no missing parents\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO memory.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 20.1 KB, free 1028.5 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO memory.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 10.2 KB, free 1028.5 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.0.68.168:37447 (size: 10.2 KB, free: 1028.8 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1203\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[7] at countByValue at StringIndexer.scala:140) (first 15 tasks are for partitions Vector(0))\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO cluster.YarnScheduler: Adding task set 0.0 with 1 tasks\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, algo-1, executor 1, partition 0, PROCESS_LOCAL, 8335 bytes)\u001B[0m\n", "\u001B[34m20/12/18 19:10:20 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on algo-1:38497 (size: 10.2 KB, free: 2.3 GB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:23 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on algo-1:38497 (size: 27.4 KB, free: 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:17 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: spark://Coars[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:14.507+0000: [GC (Allocation Failure) [PSYoungGen: 24576K->2956K(28160K)] 24576K->2964K(92672K), 0.0043774 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:14.646+0000: [GC (Allocation Failure) [PSYoungGen: 27532K->2904K(28160K)] 27540K->2920K(92672K), 0.0029711 secs] [Times: user=0.00 sys=0.01, real=0.00 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:14.862+0000: [GC (Allocation Failure) [PSYoungGen: 27480K->3583K(28160K)] 27496K->3753K(92672K), 0.0045051 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:15.082+0000: [GC (Allocation Failure) [PSYoungGen: 28159K->3507K(52736K)] 28329K->3685K(117248K), 0.0055209 secs] [Times: user=0.01 sys=0.01, real=0.01 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:15.307+0000: [GC (Allocation Failure) [PSYoungGen: 52659K->3559K(52736K)] 52837K->4759K(117248K), 0.0047821 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] \u001B[0m\n", 
"\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:15.692+0000: [GC (Allocation Failure) [PSYoungGen: 52711K->5110K(102912K)] 53911K->6625K(167424K), 0.0134342 secs] [Times: user=0.02 sys=0.00, real=0.02 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:16.181+0000: [GC (Allocation Failure) [PSYoungGen: 102902K->4893K(103424K)] 104417K->8042K(167936K), 0.0136387 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:16.577+0000: [GC (Allocation Failure) [PSYoungGen: 102685K->4359K(201728K)] 105834K->8773K(266240K), 0.0062159 secs] [Times: user=0.02 sys=0.00, real=0.00 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:16.726+0000: [GC (Metadata GC Threshold) [PSYoungGen: 52734K->2318K(201728K)] 57148K->24108K(266240K), 0.0135407 secs] [Times: user=0.02 sys=0.01, real=0.01 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:16.739+0000: [Full GC (Metadata GC Threshold) [PSYoungGen: 2318K->0K(201728K)] [ParOldGen: 21789K->22420K(60928K)] 24108K->22420K(262656K), [Metaspace: 20958K->20958K(1067008K)], 0.0299745 secs] [Times: user=0.07 sys=0.00, real=0.03 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:17.757+0000: [GC (Allocation Failure) [PSYoungGen: 195584K->6135K(292352K)] 218004K->45717K(353280K), 0.0263863 secs] [Times: user=0.05 sys=0.00, real=0.03 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:19.047+0000: [GC (Metadata GC Threshold) [PSYoungGen: 231155K->9737K(312320K)] 270738K->50891K(373248K), 0.0258435 secs] [Times: user=0.02 sys=0.00, real=0.02 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:19.073+0000: [Full GC (Metadata GC Threshold) [PSYoungGen: 9737K->0K(312320K)] [ParOldGen: 41154K->48951K(104448K)] 50891K->48951K(416768K), [Metaspace: 34909K->34909K(1079296K)], 0.0716082 secs] [Times: user=0.12 sys=0.01, real=0.08 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stdout] 2020-12-18T19:10:21.431+0000: [GC (Allocation Failure) [PSYoungGen: 299008K->12299K(412672K)] 347959K->77642K(517120K), 0.0269479 secs] [Times: user=0.07 sys=0.01, real=0.03 secs] \u001B[0m\n", "\u001B[34m[/var/log/yarn/ueGrainedScheduler@10.0.68.168:35617\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:17 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:17 INFO executor.Executor: Starting executor ID 1 on host algo-1\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:17 INFO 
util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38497.\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:17 INFO netty.NettyBlockTransferService: Server created on algo-1:38497\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:17 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:18 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(1, algo-1, 38497, None)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:18 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(1, algo-1, 38497, None)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:18 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(1, algo-1, 38497, None)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:20 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 0\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:20 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:20 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 1\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:20 INFO client.TransportClientFactory: Successfully created connection to /10.0.68.168:37447 after 5 ms (0 ms spent in bootstraps)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:20 INFO memory.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 10.2 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:20 INFO broadcast.TorrentBroadcast: Reading broadcast variable 1 took 163 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:20 INFO memory.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 20.1 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:20 INFO Configuration.deprecation: fs.s3a.server-side-encryption-key is deprecated. 
Instead, use fs.s3a.server-side-encryption.key\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:20 INFO executor.CoarseGrainedExecutorBackend: eagerFSInit: Eagerly initialized FileSystem at s3://does/not/exist in 3250 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:21 INFO codegen.CodeGenerator: Code generated in 519.496989 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:22 INFO codegen.CodeGenerator: Code generated in 20.693845 ms\u001B[0m\n", "\u001B[34m20/12/18 19:10:23 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 3769 ms on algo-1 (executor 1) (1/1)\u001B[0m\n", "\u001B[34m20/12/18 19:10:23 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool \u001B[0m\n", "\u001B[34m20/12/18 19:10:23 INFO scheduler.DAGScheduler: ShuffleMapStage 0 (countByValue at StringIndexer.scala:140) finished in 3.921 s\u001B[0m\n", "\u001B[34m20/12/18 19:10:23 INFO scheduler.DAGScheduler: looking for newly runnable stages\u001B[0m\n", "\u001B[34m20/12/18 19:10:23 INFO scheduler.DAGScheduler: running: Set()\u001B[0m\n", "\u001B[34m20/12/18 19:10:23 INFO scheduler.DAGScheduler: waiting: Set(ResultStage 1)\u001B[0m\n", "\u001B[34m20/12/18 19:10:23 INFO scheduler.DAGScheduler: failed: Set()\u001B[0m\n", "\u001B[34m20/12/18 19:10:23 INFO scheduler.DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[8] at countByValue at StringIndexer.scala:140), which has no missing parents\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO memory.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.6 KB, free 1028.5 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO memory.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.2 KB, free 1028.5 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.0.68.168:37447 (size: 2.2 KB, free: 1028.8 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1203\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.DAGScheduler: Submitting 8 missing tasks from ResultStage 1 (ShuffledRDD[8] at countByValue at StringIndexer.scala:140) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7))\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO cluster.YarnScheduler: Adding task set 1.0 with 8 tasks\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 1.0 (TID 1, algo-1, executor 1, partition 1, NODE_LOCAL, 7673 bytes)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 1.0 (TID 2, algo-1, executor 1, partition 5, NODE_LOCAL, 7673 bytes)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 1.0 (TID 3, algo-1, executor 1, partition 6, NODE_LOCAL, 7673 bytes)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1.0 (TID 4, algo-1, executor 1, partition 0, PROCESS_LOCAL, 7673 bytes)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on algo-1:38497 (size: 2.2 KB, free: 2.3 GB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO 
spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 10.0.68.168:50612\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 1.0 (TID 5, algo-1, executor 1, partition 2, PROCESS_LOCAL, 7673 bytes)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1.0 (TID 4) in 117 ms on algo-1 (executor 1) (1/8)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 1.0 (TID 6, algo-1, executor 1, partition 3, PROCESS_LOCAL, 7673 bytes)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 1.0 (TID 1) in 139 ms on algo-1 (executor 1) (2/8)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 1.0 (TID 7, algo-1, executor 1, partition 4, PROCESS_LOCAL, 7673 bytes)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 1.0 (TID 8, algo-1, executor 1, partition 7, PROCESS_LOCAL, 7673 bytes)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 1.0 (TID 5) in 29 ms on algo-1 (executor 1) (3/8)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 1.0 (TID 3) in 147 ms on algo-1 (executor 1) (4/8)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 1.0 (TID 2) in 160 ms on algo-1 (executor 1) (5/8)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Finished task 3.0 in stage 1.0 (TID 6) in 51 ms on algo-1 (executor 1) (6/8)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 1.0 (TID 8) in 40 ms on algo-1 (executor 1) (7/8)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 1.0 (TID 7) in 43 ms on algo-1 (executor 1) (8/8)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO cluster.YarnScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool \u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.DAGScheduler: ResultStage 1 (countByValue at StringIndexer.scala:140) finished in 0.207 s\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO scheduler.DAGScheduler: Job 0 finished: countByValue at StringIndexer.scala:140, took 4.537815 s\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO storage.BlockManagerInfo: Removed broadcast_2_piece0 on 10.0.68.168:37447 in memory (size: 2.2 KB, free: 1028.8 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:24 INFO storage.BlockManagerInfo: Removed broadcast_2_piece0 on algo-1:38497 in memory (size: 2.2 KB, free: 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:23 INFO datasources.FileScanRDD: TID: 0 - Reading current file: path: s3://sagemaker-eu-west-1-245582572290/sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone/abalone.csv, range: 0-191873, partition values: [empty row], isDataPresent: false\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:23 INFO codegen.CodeGenerator: Code generated in 13.954699 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:23 INFO broadcast.TorrentBroadcast: Started reading broadcast 
variable 0\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:23 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 27.4 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:23 INFO broadcast.TorrentBroadcast: Reading broadcast variable 0 took 10 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:23 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 397.9 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:23 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 2022 bytes result sent to driver\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Running task 1.0 in stage 1.0 (TID 1)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO spark.MapOutputTrackerWorker: Updating epoch to 1 and clearing cache\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 2\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 2\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 3\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 4\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Running task 5.0 in stage 1.0 (TID 2)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Running task 6.0 in stage 1.0 (TID 3)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Running task 0.0 in stage 1.0 (TID 4)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO memory.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.2 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO broadcast.TorrentBroadcast: Reading broadcast variable 2 took 19 ms\u001B[0m\n", 
"\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO memory.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.6 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO spark.MapOutputTrackerWorker: Don't have map outputs for shuffle 0, fetching them\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO spark.MapOutputTrackerWorker: Don't have map outputs for shuffle 0, fetching them\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO spark.MapOutputTrackerWorker: Doing the fetch; tracker endpoint = NettyRpcEndpointRef(spark://MapOutputTracker@10.0.68.168:35617)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO spark.MapOutputTrackerWorker: Don't have map outputs for shuffle 0, fetching them\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO spark.MapOutputTrackerWorker: Don't have map outputs for shuffle 0, fetching them\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO spark.MapOutputTrackerWorker: Got the output locations\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Getting 0 non-empty blocks including 0 local blocks and 0 remote blocks\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Getting 1 non-empty blocks including 1 local blocks and 0 remote blocks\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Started 0 remote fetches in 8 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Started 0 remote fetches in 8 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Getting 1 non-empty blocks including 1 local blocks and 0 remote blocks\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Started 0 remote fetches in 9 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Getting 1 non-empty blocks including 1 local blocks and 0 remote blocks\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 
ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Finished task 0.0 in stage 1.0 (TID 4). 1134 bytes result sent to driver\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 5\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Finished task 1.0 in stage 1.0 (TID 1). 1324 bytes result sent to driver\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Running task 2.0 in stage 1.0 (TID 5)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Finished task 5.0 in stage 1.0 (TID 2). 1281 bytes result sent to driver\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO datasources.FileSourceStrategy: Pruning directories with: \u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO datasources.FileSourceStrategy: Post-Scan Filters: \u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO datasources.FileSourceStrategy: Output Data Schema: struct\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO execution.FileSourceScanExec: Pushed Filters: \u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO codegen.CodeGenerator: Code generated in 147.418553 ms\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO memory.MemoryStore: Block broadcast_3 stored as values in memory (estimated size 303.0 KB, free 1028.2 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO memory.MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 27.4 KB, free 1028.2 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO storage.BlockManagerInfo: Added broadcast_3_piece0 in memory on 10.0.68.168:37447 (size: 27.4 KB, free: 1028.8 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO spark.SparkContext: Created broadcast 3 from javaToPython at NativeMethodAccessorImpl.java:0\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO execution.FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes, number of split files: 1, prefetch: false\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO execution.FileSourceScanExec: relation: None, fileSplitsInPartitionHistogram: ArrayBuffer((1 fileSplits,1))\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO Configuration.deprecation: mapred.output.dir is deprecated. 
Instead, use mapreduce.output.fileoutputformat.outputdir\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO io.HadoopMapRedCommitProtocol: Using output committer class org.apache.hadoop.mapred.FileOutputCommitter\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false\u001B[0m\n", "\u001B[34m20/12/18 19:10:25 INFO output.DirectFileOutputCommitter: Direct Write: DISABLED\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO spark.SparkContext: Starting job: runJob at SparkHadoopWriter.scala:78\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO scheduler.DAGScheduler: Got job 1 (runJob at SparkHadoopWriter.scala:78) with 1 output partitions\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO scheduler.DAGScheduler: Final stage: ResultStage 2 (runJob at SparkHadoopWriter.scala:78)\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO scheduler.DAGScheduler: Parents of final stage: List()\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO scheduler.DAGScheduler: Missing parents: List()\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO scheduler.DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[16] at saveAsTextFile at NativeMethodAccessorImpl.java:0), which has no missing parents\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO memory.MemoryStore: Block broadcast_4 stored as values in memory (estimated size 134.8 KB, free 1028.0 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO memory.MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 51.2 KB, free 1028.0 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO storage.BlockManagerInfo: Added broadcast_4_piece0 in memory on 10.0.68.168:37447 (size: 51.2 KB, free: 1028.7 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO spark.SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1203\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[16] at saveAsTextFile at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO cluster.YarnScheduler: Adding task set 2.0 with 1 tasks\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 2.0 (TID 9, algo-1, executor 1, partition 0, PROCESS_LOCAL, 8346 bytes)\u001B[0m\n", "\u001B[34m20/12/18 19:10:26 INFO storage.BlockManagerInfo: Added broadcast_4_piece0 in memory on algo-1:38497 (size: 51.2 KB, free: 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Finished task 6.0 in stage 1.0 (TID 3). 
1324 bytes result sent to driver\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Getting 0 non-empty blocks including 0 local blocks and 0 remote blocks\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Finished task 2.0 in stage 1.0 (TID 5). 1134 bytes result sent to driver\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 6\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Running task 3.0 in stage 1.0 (TID 6)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Getting 0 non-empty blocks including 0 local blocks and 0 remote blocks\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 7\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Running task 4.0 in stage 1.0 (TID 7)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 8\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Running task 7.0 in stage 1.0 (TID 8)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Getting 0 non-empty blocks including 0 local blocks and 0 remote blocks\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Finished task 3.0 in stage 1.0 (TID 6). 
1134 bytes result sent to driver\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Getting 0 non-empty blocks including 0 local blocks and 0 remote blocks\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO storage.ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Finished task 7.0 in stage 1.0 (TID 8). 1134 bytes result sent to driver\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:24 INFO executor.Executor: Finished task 4.0 in stage 1.0 (TID 7). 1177 bytes result sent to driver\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:26 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 9\u001B[0m\n", "\u001B[34m20/12/18 19:10:27 INFO storage.BlockManagerInfo: Added broadcast_3_piece0 in memory on algo-1:38497 (size: 27.4 KB, free: 2.3 GB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:29 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 2.0 (TID 9) in 3322 ms on algo-1 (executor 1) (1/1)\u001B[0m\n", "\u001B[34m20/12/18 19:10:29 INFO cluster.YarnScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool \u001B[0m\n", "\u001B[34m20/12/18 19:10:29 INFO python.PythonAccumulatorV2: Connected to AccumulatorServer at host: 127.0.0.1 port: 37533\u001B[0m\n", "\u001B[34m20/12/18 19:10:29 INFO scheduler.DAGScheduler: ResultStage 2 (runJob at SparkHadoopWriter.scala:78) finished in 3.353 s\u001B[0m\n", "\u001B[34m20/12/18 19:10:29 INFO scheduler.DAGScheduler: Job 1 finished: runJob at SparkHadoopWriter.scala:78, took 3.357940 s\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:26 INFO executor.Executor: Running task 0.0 in stage 2.0 (TID 9)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:26 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 4\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:26 INFO memory.MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 51.2 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:26 INFO broadcast.TorrentBroadcast: Reading broadcast variable 4 took 12 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:26 INFO memory.MemoryStore: Block broadcast_4 stored as values in memory (estimated size 134.8 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:26 INFO codegen.CodeGenerator: Code generated in 149.068863 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 
19:10:26 INFO codegen.CodeGenerator: Code generated in 74.062112 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:26 INFO codegen.CodeGenerator: Code generated in 18.846509 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:27 INFO datasources.FileScanRDD: TID: 9 - Reading current file: path: s3://sagemaker-eu-west-1-245582572290/sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone/abalone.csv, range: 0-191873, partition values: [empty row], isDataPresent: false\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:27 INFO io.HadoopMapRedCommitProtocol: Using output committer class org.apache.hadoop.mapred.FileOutputCommitter\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:27 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:27 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:27 INFO output.DirectFileOutputCommitter: Direct Write: DISABLED\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:27 INFO codegen.CodeGenerator: Code generated in 19.484053 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:27 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 3\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:27 INFO memory.MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 27.4 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:27 INFO broadcast.TorrentBroadcast: Reading broadcast variable 3 took 19 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:27 INFO memory.MemoryStore: Block broadcast_3 stored as values in memory (estimated size 397.9 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:28 INFO python.PythonRunner: Times: total = 1565, boot = 449, init = 619, finish = 497\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO io.SparkHadoopWriter: Job job_20201218191025_0016 committed.\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO datasources.FileSourceStrategy: Pruning directories with: \u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO datasources.FileSourceStrategy: Post-Scan Filters: \u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO datasources.FileSourceStrategy: Output Data Schema: struct\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO execution.FileSourceScanExec: Pushed Filters: \u001B[0m\n", 
"\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 69\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 74\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 89\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 68\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 67\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 66\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 83\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 80\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 90\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 73\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 76\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 81\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 84\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 86\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 78\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 87\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 71\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 88\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 75\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 79\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 82\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO storage.BlockManagerInfo: Removed broadcast_4_piece0 on algo-1:38497 in memory (size: 51.2 KB, free: 2.3 GB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO storage.BlockManagerInfo: Removed broadcast_4_piece0 on 10.0.68.168:37447 in memory (size: 51.2 KB, free: 1028.8 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO codegen.CodeGenerator: Code generated in 111.886603 ms\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO memory.MemoryStore: Block broadcast_5 stored as values in memory (estimated size 303.0 KB, free 1027.9 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 77\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 85\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 72\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.ContextCleaner: Cleaned accumulator 70\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO memory.MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 27.4 KB, free 1027.9 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO storage.BlockManagerInfo: Added broadcast_5_piece0 in memory on 10.0.68.168:37447 (size: 27.4 KB, free: 1028.8 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.SparkContext: Created broadcast 5 from javaToPython at NativeMethodAccessorImpl.java:0\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO execution.FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes, number of split files: 1, prefetch: 
false\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO execution.FileSourceScanExec: relation: None, fileSplitsInPartitionHistogram: ArrayBuffer((1 fileSplits,1))\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO io.HadoopMapRedCommitProtocol: Using output committer class org.apache.hadoop.mapred.FileOutputCommitter\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO output.DirectFileOutputCommitter: Direct Write: DISABLED\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.SparkContext: Starting job: runJob at SparkHadoopWriter.scala:78\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO scheduler.DAGScheduler: Got job 2 (runJob at SparkHadoopWriter.scala:78) with 1 output partitions\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO scheduler.DAGScheduler: Final stage: ResultStage 3 (runJob at SparkHadoopWriter.scala:78)\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO scheduler.DAGScheduler: Parents of final stage: List()\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO scheduler.DAGScheduler: Missing parents: List()\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO scheduler.DAGScheduler: Submitting ResultStage 3 (MapPartitionsRDD[24] at saveAsTextFile at NativeMethodAccessorImpl.java:0), which has no missing parents\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO memory.MemoryStore: Block broadcast_6 stored as values in memory (estimated size 134.8 KB, free 1027.7 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO memory.MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 51.2 KB, free 1027.7 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO storage.BlockManagerInfo: Added broadcast_6_piece0 in memory on 10.0.68.168:37447 (size: 51.2 KB, free: 1028.7 MB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO spark.SparkContext: Created broadcast 6 from broadcast at DAGScheduler.scala:1203\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 3 (MapPartitionsRDD[24] at saveAsTextFile at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO cluster.YarnScheduler: Adding task set 3.0 with 1 tasks\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 3.0 (TID 10, algo-1, executor 1, partition 0, PROCESS_LOCAL, 8346 bytes)\u001B[0m\n", "\u001B[34m20/12/18 19:10:30 INFO storage.BlockManagerInfo: Added broadcast_6_piece0 in memory on algo-1:38497 (size: 51.2 KB, free: 2.3 GB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:31 INFO storage.BlockManagerInfo: Added broadcast_5_piece0 in memory on algo-1:38497 (size: 27.4 KB, free: 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:29 INFO output.FileOutputCommitter: Saved output of task 'attempt_20201218191025_0016_m_000000_0' to s3://sagemaker-eu-west-1-245582572290/sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/preprocessed/abalone/train\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:29 INFO mapred.SparkHadoopMapRedUtil: attempt_20201218191025_0016_m_000000_0: 
Committed\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:29 INFO executor.Executor: Finished task 0.0 in stage 2.0 (TID 9). 2994 bytes result sent to driver\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:30 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 10\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:30 INFO executor.Executor: Running task 0.0 in stage 3.0 (TID 10)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:30 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 6\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:30 INFO memory.MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 51.2 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:30 INFO broadcast.TorrentBroadcast: Reading broadcast variable 6 took 12 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:30 INFO memory.MemoryStore: Block broadcast_6 stored as values in memory (estimated size 134.8 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:31 INFO codegen.CodeGenerator: Code generated in 75.741805 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:31 INFO io.HadoopMapRedCommitProtocol: Using output committer class org.apache.hadoop.mapred.FileOutputCommitter\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:31 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:31 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:31 INFO output.DirectFileOutputCommitter: Direct Write: DISABLED\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:31 INFO datasources.FileScanRDD: TID: 10 - Reading current file: path: s3://sagemaker-eu-west-1-245582572290/sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone/abalone.csv, range: 0-191873, partition values: [empty row], isDataPresent: false\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:31 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 5\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:31 INFO 
memory.MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 27.4 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:31 INFO broadcast.TorrentBroadcast: Reading broadcast variable 5 took 12 ms\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_1608318579307_0001/container_1608318579307_0001_01_000002/stderr] 20/12/18 19:10:31 INFO memory.MemoryStore: Block broadcast_5 stored as values in memory (estimated size 397.9 KB, free 2.3 GB)\u001B[0m\n", "\u001B[34m20/12/18 19:10:32 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 3.0 (TID 10) in 1413 ms on algo-1 (executor 1) (1/1)\u001B[0m\n", "\u001B[34m20/12/18 19:10:32 INFO cluster.YarnScheduler: Removed TaskSet 3.0, whose tasks have all completed, from pool \u001B[0m\n", "\u001B[34m20/12/18 19:10:32 INFO scheduler.DAGScheduler: ResultStage 3 (runJob at SparkHadoopWriter.scala:78) finished in 1.433 s\u001B[0m\n", "\u001B[34m20/12/18 19:10:32 INFO scheduler.DAGScheduler: Job 2 finished: runJob at SparkHadoopWriter.scala:78, took 1.437870 s\u001B[0m\n", "\u001B[34m20/12/18 19:10:32 INFO io.SparkHadoopWriter: Job job_20201218191030_0024 committed.\u001B[0m\n", "\u001B[34m20/12/18 19:10:32 INFO spark.SparkContext: Invoking stop() from shutdown hook\u001B[0m\n", "\u001B[34m20/12/18 19:10:32 INFO server.AbstractConnector: Stopped Spark@611e2320{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}\u001B[0m\n", "\u001B[34m20/12/18 19:10:32 INFO ui.SparkUI: Stopped Spark web UI at http://10.0.68.168:4040\u001B[0m\n", "\u001B[34m20/12/18 19:10:32 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices\u001B[0m\n", "\u001B[34m(serviceOption=None,\n", " services=List(),\n", " started=false)\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO cluster.YarnClientSchedulerBackend: Stopped\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 WARN nio.NioEventLoop: Selector.select() returned prematurely 512 times in a row; rebuilding Selector io.netty.channel.nio.SelectedSelectionKeySetSelector@64c24d0a.\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO nio.NioEventLoop: Migrated 0 channel(s) to the new Selector.\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO memory.MemoryStore: MemoryStore cleared\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO storage.BlockManager: BlockManager stopped\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO storage.BlockManagerMaster: BlockManagerMaster stopped\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO spark.SparkContext: Successfully stopped SparkContext\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO util.ShutdownHookManager: Shutdown hook called\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-8cc12d82-ba7b-4941-9e14-9e6eb5423ae9\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO util.ShutdownHookManager: Deleting directory 
/tmp/spark-34ee3167-47a5-40d6-b567-33245a1ea730/pyspark-024c910c-ff83-4b5b-8a7f-b368bbef6db9\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-34ee3167-47a5-40d6-b567-33245a1ea730\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO attempt.RMAppAttemptImpl: Updating application attempt appattempt_1608318579307_0001_000001 with final state: FINISHING, and exit status: -1000\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO attempt.RMAppAttemptImpl: appattempt_1608318579307_0001_000001 State change from RUNNING to FINAL_SAVING on event = UNREGISTERED\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO rmapp.RMAppImpl: Updating application application_1608318579307_0001 with final state: FINISHING\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO recovery.RMStateStore: Updating info for app: application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO rmapp.RMAppImpl: application_1608318579307_0001 State change from RUNNING to FINAL_SAVING on event = ATTEMPT_UNREGISTERED\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO attempt.RMAppAttemptImpl: appattempt_1608318579307_0001_000001 State change from FINAL_SAVING to FINISHING on event = ATTEMPT_UPDATE_SAVED\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO rmapp.RMAppImpl: application_1608318579307_0001 State change from FINAL_SAVING to FINISHING on event = APP_UPDATE_SAVED\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO launcher.ContainerLaunch: Container container_1608318579307_0001_01_000002 succeeded \u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO container.ContainerImpl: Container container_1608318579307_0001_01_000002 transitioned from RUNNING to EXITED_WITH_SUCCESS\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO launcher.ContainerLaunch: Cleaning up container container_1608318579307_0001_01_000002\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1608318579307_0001/container_1608318579307_0001_01_000002\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO nodemanager.NMAuditLogger: USER=root#011OPERATION=Container Finished - Succeeded#011TARGET=ContainerImpl#011RESULT=SUCCESS#011APPID=application_1608318579307_0001#011CONTAINERID=container_1608318579307_0001_01_000002\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO container.ContainerImpl: Container container_1608318579307_0001_01_000002 transitioned from EXITED_WITH_SUCCESS to DONE\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO application.ApplicationImpl: Removing container_1608318579307_0001_01_000002 from application application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1608318579307_0001_01_000002\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1608318579307_0001\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO rmcontainer.RMContainerImpl: container_1608318579307_0001_01_000002 Container Transitioned from RUNNING to COMPLETED\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO resourcemanager.RMAuditLogger: USER=root#011OPERATION=AM Released Container#011TARGET=SchedulerApp#011RESULT=SUCCESS#011APPID=application_1608318579307_0001#011CONTAINERID=container_1608318579307_0001_01_000002#011RESOURCE=#011QUEUENAME=default\u001B[0m\n", "\u001B[34m20/12/18 19:10:33 INFO resourcemanager.ApplicationMasterService: application_1608318579307_0001 unregistered 
successfully. \u001B[0m\n", "\u001B[34m12-18 19:10 smspark-submit INFO spark submit was successful. primary node exiting.\u001B[0m\n", "\u001B[34m[/var/log/yarn/userlogs/application_\u001B[0m\n", "\n", "{\n", " \"___PySparkProcessor_latest_job_name\": \"sm-spark-2020-12-18-19-04-49-672\"\n", "}\n" ] } ], "source": [ "%%pyspark submit --logs --base_job_name sm-spark --arguments '--s3_input_bucket sagemaker-eu-west-1-245582572290 --s3_input_key_prefix sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone --s3_output_bucket sagemaker-eu-west-1-245582572290 --s3_output_key_prefix sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/preprocessed/abalone'\n", "from __future__ import print_function\n", "from __future__ import unicode_literals\n", "\n", "import argparse\n", "import csv\n", "import os\n", "import shutil\n", "import sys\n", "import time\n", "\n", "import pyspark\n", "from pyspark.sql import SparkSession\n", "from pyspark.ml import Pipeline\n", "from pyspark.ml.feature import (\n", " OneHotEncoder,\n", " StringIndexer,\n", " VectorAssembler,\n", " VectorIndexer,\n", ")\n", "from pyspark.sql.functions import *\n", "from pyspark.sql.types import (\n", " DoubleType,\n", " StringType,\n", " StructField,\n", " StructType,\n", ")\n", "\n", "\n", "def csv_line(data):\n", " r = ','.join(str(d) for d in data[1])\n", " return str(data[0]) + \",\" + r\n", "\n", "\n", "def main():\n", " parser = argparse.ArgumentParser(description=\"app inputs and outputs\")\n", " parser.add_argument(\"--s3_input_bucket\", type=str, help=\"s3 input bucket\")\n", " parser.add_argument(\"--s3_input_key_prefix\", type=str, help=\"s3 input key prefix\")\n", " parser.add_argument(\"--s3_output_bucket\", type=str, help=\"s3 output bucket\")\n", " parser.add_argument(\"--s3_output_key_prefix\", type=str, help=\"s3 output key prefix\")\n", " args = parser.parse_args()\n", "\n", " spark = SparkSession.builder.appName(\"PySparkApp\").getOrCreate()\n", "\n", " # This is needed to save RDDs which is the only way to write nested Dataframes into CSV format\n", " spark.sparkContext._jsc.hadoopConfiguration().set(\"mapred.output.committer.class\",\n", " \"org.apache.hadoop.mapred.FileOutputCommitter\")\n", "\n", " # Defining the schema corresponding to the input data. 
The input data does not contain headers\n", "    schema = StructType([StructField(\"sex\", StringType(), True), \n", "                         StructField(\"length\", DoubleType(), True),\n", "                         StructField(\"diameter\", DoubleType(), True),\n", "                         StructField(\"height\", DoubleType(), True),\n", "                         StructField(\"whole_weight\", DoubleType(), True),\n", "                         StructField(\"shucked_weight\", DoubleType(), True),\n", "                         StructField(\"viscera_weight\", DoubleType(), True), \n", "                         StructField(\"shell_weight\", DoubleType(), True), \n", "                         StructField(\"rings\", DoubleType(), True)])\n", "\n", "    # Download the data from S3 into a DataFrame\n", "    total_df = spark.read.csv(('s3://' + os.path.join(args.s3_input_bucket, args.s3_input_key_prefix,\n", "                                                       'abalone.csv')), header=False, schema=schema)\n", "\n", "    # StringIndexer on the sex column, which has categorical values\n", "    sex_indexer = StringIndexer(inputCol=\"sex\", outputCol=\"indexed_sex\")\n", "    \n", "    # One-hot encode the string-indexed sex column (indexed_sex)\n", "    sex_encoder = OneHotEncoder(inputCol=\"indexed_sex\", outputCol=\"sex_vec\")\n", "\n", "    # VectorAssembler combines all the features into a single vector so the result can easily be saved in CSV format\n", "    assembler = VectorAssembler(inputCols=[\"sex_vec\", \n", "                                           \"length\", \n", "                                           \"diameter\", \n", "                                           \"height\", \n", "                                           \"whole_weight\", \n", "                                           \"shucked_weight\", \n", "                                           \"viscera_weight\", \n", "                                           \"shell_weight\"], \n", "                                outputCol=\"features\")\n", "    \n", "    # The pipeline comprises the steps added above\n", "    pipeline = Pipeline(stages=[sex_indexer, sex_encoder, assembler])\n", "    \n", "    # This step trains the feature transformers\n", "    model = pipeline.fit(total_df)\n", "    \n", "    # This step transforms the dataset with information obtained from the previous fit\n", "    transformed_total_df = model.transform(total_df)\n", "    \n", "    # Split the overall dataset into 80-20 training and validation sets\n", "    (train_df, validation_df) = transformed_total_df.randomSplit([0.8, 0.2])\n", "    \n", "    # Convert the train DataFrame to an RDD to save in CSV format and upload to S3\n", "    train_rdd = train_df.rdd.map(lambda x: (x.rings, x.features))\n", "    train_lines = train_rdd.map(csv_line)\n", "    train_lines.saveAsTextFile('s3://' + os.path.join(args.s3_output_bucket, args.s3_output_key_prefix, 'train'))\n", "    \n", "    # Convert the validation DataFrame to an RDD to save in CSV format and upload to S3\n", "    validation_rdd = validation_df.rdd.map(lambda x: (x.rings, x.features))\n", "    validation_lines = validation_rdd.map(csv_line)\n", "    validation_lines.saveAsTextFile('s3://' + os.path.join(args.s3_output_bucket, args.s3_output_key_prefix, 'validation'))\n", "\n", "\n", "if __name__ == \"__main__\":\n", "    main()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Stop latest processing Job" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " \"AppSpecification\": {\n", " \"ContainerArguments\": [\n", " \"'--s3_input_bucket'\",\n", " \"'sagemaker-eu-west-1-245582572290'\",\n", " \"'--s3_input_key_prefix'\",\n", " \"'sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone'\",\n", " \"'--s3_output_bucket'\",\n", " \"'sagemaker-eu-west-1-245582572290'\",\n", " \"'sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/preprocessed/abalone'\"\n", " ],\n", " \"ContainerEntrypoint\": [\n", " \"smspark-submit\",\n", " \"/opt/ml/processing/input/code/tmp-2bb9dee7-1e13-4010-ab48-2deba9da0094.py\"\n", " ],\n", " 
\"ImageUri\": \"571004829621.dkr.ecr.eu-west-1.amazonaws.com/sagemaker-spark-processing:2.4-cpu\"\n", " },\n", " \"CreationTime\": \"2020-12-17 18:57:03.608000+00:00\",\n", " \"Environment\": {},\n", " \"LastModifiedTime\": \"2020-12-17 18:57:22.834000+00:00\",\n", " \"ProcessingInputs\": [\n", " {\n", " \"AppManaged\": false,\n", " \"InputName\": \"code\",\n", " \"S3Input\": {\n", " \"LocalPath\": \"/opt/ml/processing/input/code\",\n", " \"S3CompressionType\": \"None\",\n", " \"S3DataDistributionType\": \"FullyReplicated\",\n", " \"S3DataType\": \"S3Prefix\",\n", " \"S3InputMode\": \"File\",\n", " \"S3Uri\": \"s3://sagemaker-eu-west-1-245582572290/sm-spark-2020-12-17-18-57-03-062/input/code/tmp-2bb9dee7-1e13-4010-ab48-2deba9da0094.py\"\n", " }\n", " }\n", " ],\n", " \"ProcessingJobArn\": \"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-17-18-57-03-062\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-17-18-57-03-062\",\n", " \"ProcessingJobStatus\": \"Stopping\",\n", " \"ProcessingResources\": {\n", " \"ClusterConfig\": {\n", " \"InstanceCount\": 1,\n", " \"InstanceType\": \"ml.c4.xlarge\",\n", " \"VolumeSizeInGB\": 30\n", " }\n", " },\n", " \"ResponseMetadata\": {\n", " \"HTTPHeaders\": {\n", " \"content-length\": \"1798\",\n", " \"content-type\": \"application/x-amz-json-1.1\",\n", " \"date\": \"Thu, 17 Dec 2020 18:57:22 GMT\",\n", " \"x-amzn-requestid\": \"5766dc7a-3c8c-4dc6-a04e-772c615b3abe\"\n", " },\n", " \"HTTPStatusCode\": 200,\n", " \"RequestId\": \"5766dc7a-3c8c-4dc6-a04e-772c615b3abe\",\n", " \"RetryAttempts\": 0\n", " },\n", " \"RoleArn\": \"arn:aws:iam::245582572290:role/workshop-sagemaker\",\n", " \"StoppingCondition\": {\n", " \"MaxRuntimeInSeconds\": 1200\n", " }\n", "}\n" ] } ], "source": [ "%pyspark delete" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Describe latest processing Job" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " \"AppSpecification\": {\n", " \"ContainerArguments\": [\n", " \"--s3_input_bucket\",\n", " \"sagemaker-eu-west-1-245582572290\",\n", " \"--s3_input_key_prefix\",\n", " \"sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/raw/abalone\",\n", " \"--s3_output_bucket\",\n", " \"sagemaker-eu-west-1-245582572290\",\n", " \"--s3_output_key_prefix\",\n", " \"sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/preprocessed/abalone\"\n", " ],\n", " \"ContainerEntrypoint\": [\n", " \"smspark-submit\",\n", " \"/opt/ml/processing/input/code/tmp-02acb435-c907-4d1c-9e8f-bb0bd5bc5582.py\"\n", " ],\n", " \"ImageUri\": \"571004829621.dkr.ecr.eu-west-1.amazonaws.com/sagemaker-spark-processing:2.4-cpu\"\n", " },\n", " \"CreationTime\": \"2020-12-18 19:04:50.120000+00:00\",\n", " \"Environment\": {},\n", " \"LastModifiedTime\": \"2020-12-18 19:10:38.739000+00:00\",\n", " \"ProcessingEndTime\": \"2020-12-18 19:10:38.737000+00:00\",\n", " \"ProcessingInputs\": [\n", " {\n", " \"AppManaged\": false,\n", " \"InputName\": \"code\",\n", " \"S3Input\": {\n", " \"LocalPath\": \"/opt/ml/processing/input/code\",\n", " \"S3CompressionType\": \"None\",\n", " \"S3DataDistributionType\": \"FullyReplicated\",\n", " \"S3DataType\": \"S3Prefix\",\n", " \"S3InputMode\": \"File\",\n", " \"S3Uri\": \"s3://sagemaker-eu-west-1-245582572290/sm-spark-2020-12-18-19-04-49-672/input/code/tmp-02acb435-c907-4d1c-9e8f-bb0bd5bc5582.py\"\n", " }\n", " }\n", " ],\n", " \"ProcessingJobArn\": 
\"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-18-19-04-49-672\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-18-19-04-49-672\",\n", " \"ProcessingJobStatus\": \"Completed\",\n", " \"ProcessingResources\": {\n", " \"ClusterConfig\": {\n", " \"InstanceCount\": 1,\n", " \"InstanceType\": \"ml.c4.xlarge\",\n", " \"VolumeSizeInGB\": 30\n", " }\n", " },\n", " \"ProcessingStartTime\": \"2020-12-18 19:09:19.619000+00:00\",\n", " \"ResponseMetadata\": {\n", " \"HTTPHeaders\": {\n", " \"content-length\": \"1698\",\n", " \"content-type\": \"application/x-amz-json-1.1\",\n", " \"date\": \"Mon, 21 Dec 2020 12:01:36 GMT\",\n", " \"x-amzn-requestid\": \"dfd12b3b-c962-4a26-96c3-25dbdba8328f\"\n", " },\n", " \"HTTPStatusCode\": 200,\n", " \"RequestId\": \"dfd12b3b-c962-4a26-96c3-25dbdba8328f\",\n", " \"RetryAttempts\": 0\n", " },\n", " \"RoleArn\": \"arn:aws:iam::245582572290:role/workshop-sagemaker\",\n", " \"StoppingCondition\": {\n", " \"MaxRuntimeInSeconds\": 1200\n", " }\n", "}\n" ] } ], "source": [ "%pyspark status" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## List processing jobs" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{\n", " \"NextToken\": \"cIws2QhTXUIa8bi8X9aU7gCAR0Xdc3x9L/Ofg4vsVMTtcNqRqLcpBqE42+cDc29TFQi5WMns8YJ3nEv3nZkSmUmnBY0xgo1e/hoJ+sIFCZ7+RpBLRYj09ElNRWC8SrP8c41w+DHUtr+4lKQAoT9hd9TTBxl9pmgioXWtSwHWKuCo/f5Vfe8UUgbAAtr41ZVH4c5TrGHpjouaACJ8UDkh0MDULBPv/d83dxOOTMWDVCVj/0ytPuelGYpiLJKgc83zhlXqYUKIBv9W0YA732xprs5J3FK0yllAjjSkE3by9UW66XYmSpom5qL6CNGJK6e2Ffc77HZSXQz8E2J4cQTGtZgtcajeqk6XZcZHRbQSmKd43UMM5+AcjtfvZTBJSNrk9a9ysrPf5csKJOH6R5SXsDOTlVX73YemxbbU9bpmaAk8KkeuYBrBC1+85Q0krRrdyAuB/kZxy0AyCyNuW1Wi6WZGZadSrAVJFIiKXTuyocxjNXWmQHgFhWAOdLRMx/2BFmtZzk8xTj9K6q72UOnvayvZ4FIRAt8sLQN50Vr46SFSM5B/fY+T4j/CPgWhXLY=\",\n", " \"ProcessingJobSummaries\": [\n", " {\n", " \"CreationTime\": \"2020-12-18 19:04:50.120000+00:00\",\n", " \"LastModifiedTime\": \"2020-12-18 19:10:38.739000+00:00\",\n", " \"ProcessingEndTime\": \"2020-12-18 19:10:38.737000+00:00\",\n", " \"ProcessingJobArn\": \"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-18-19-04-49-672\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-18-19-04-49-672\",\n", " \"ProcessingJobStatus\": \"Completed\"\n", " },\n", " {\n", " \"CreationTime\": \"2020-12-18 18:35:35.424000+00:00\",\n", " \"FailureReason\": \"AlgorithmError: See job logs for more information\",\n", " \"LastModifiedTime\": \"2020-12-18 18:40:03.727000+00:00\",\n", " \"ProcessingEndTime\": \"2020-12-18 18:40:03.724000+00:00\",\n", " \"ProcessingJobArn\": \"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-18-18-35-34-940\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-18-18-35-34-940\",\n", " \"ProcessingJobStatus\": \"Failed\"\n", " },\n", " {\n", " \"CreationTime\": \"2020-12-18 16:49:28.625000+00:00\",\n", " \"FailureReason\": \"AlgorithmError: See job logs for more information\",\n", " \"LastModifiedTime\": \"2020-12-18 16:54:20.471000+00:00\",\n", " \"ProcessingEndTime\": \"2020-12-18 16:54:20.468000+00:00\",\n", " \"ProcessingJobArn\": \"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-18-16-49-28-157\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-18-16-49-28-157\",\n", " \"ProcessingJobStatus\": \"Failed\"\n", " },\n", " {\n", " \"CreationTime\": \"2020-12-18 16:20:18.214000+00:00\",\n", " \"FailureReason\": \"AlgorithmError: See job logs for more 
information\",\n", " \"LastModifiedTime\": \"2020-12-18 16:24:49.131000+00:00\",\n", " \"ProcessingEndTime\": \"2020-12-18 16:24:49.127000+00:00\",\n", " \"ProcessingJobArn\": \"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-18-16-20-17-703\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-18-16-20-17-703\",\n", " \"ProcessingJobStatus\": \"Failed\"\n", " },\n", " {\n", " \"CreationTime\": \"2020-12-17 18:57:48.461000+00:00\",\n", " \"FailureReason\": \"AlgorithmError: See job logs for more information\",\n", " \"LastModifiedTime\": \"2020-12-17 19:02:22.988000+00:00\",\n", " \"ProcessingEndTime\": \"2020-12-17 19:02:22.985000+00:00\",\n", " \"ProcessingJobArn\": \"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-17-18-57-47-993\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-17-18-57-47-993\",\n", " \"ProcessingJobStatus\": \"Failed\"\n", " },\n", " {\n", " \"CreationTime\": \"2020-12-17 18:57:03.608000+00:00\",\n", " \"LastModifiedTime\": \"2020-12-17 19:01:12.690000+00:00\",\n", " \"ProcessingEndTime\": \"2020-12-17 19:01:12.687000+00:00\",\n", " \"ProcessingJobArn\": \"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-17-18-57-03-062\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-17-18-57-03-062\",\n", " \"ProcessingJobStatus\": \"Stopped\"\n", " },\n", " {\n", " \"CreationTime\": \"2020-12-17 18:56:04.674000+00:00\",\n", " \"LastModifiedTime\": \"2020-12-17 19:00:34.152000+00:00\",\n", " \"ProcessingEndTime\": \"2020-12-17 19:00:34.149000+00:00\",\n", " \"ProcessingJobArn\": \"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-17-18-56-04-197\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-17-18-56-04-197\",\n", " \"ProcessingJobStatus\": \"Completed\"\n", " },\n", " {\n", " \"CreationTime\": \"2020-12-17 18:40:15.485000+00:00\",\n", " \"LastModifiedTime\": \"2020-12-17 18:44:25.453000+00:00\",\n", " \"ProcessingEndTime\": \"2020-12-17 18:44:25.450000+00:00\",\n", " \"ProcessingJobArn\": \"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-17-18-40-15-067\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-17-18-40-15-067\",\n", " \"ProcessingJobStatus\": \"Completed\"\n", " },\n", " {\n", " \"CreationTime\": \"2020-12-17 18:27:20.802000+00:00\",\n", " \"LastModifiedTime\": \"2020-12-17 18:31:32.586000+00:00\",\n", " \"ProcessingEndTime\": \"2020-12-17 18:31:32.583000+00:00\",\n", " \"ProcessingJobArn\": \"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-17-18-27-20-391\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-17-18-27-20-391\",\n", " \"ProcessingJobStatus\": \"Completed\"\n", " },\n", " {\n", " \"CreationTime\": \"2020-12-17 18:26:59.950000+00:00\",\n", " \"LastModifiedTime\": \"2020-12-17 18:31:17.584000+00:00\",\n", " \"ProcessingEndTime\": \"2020-12-17 18:31:17.582000+00:00\",\n", " \"ProcessingJobArn\": \"arn:aws:sagemaker:eu-west-1:245582572290:processing-job/sm-spark-2020-12-17-18-26-59-476\",\n", " \"ProcessingJobName\": \"sm-spark-2020-12-17-18-26-59-476\",\n", " \"ProcessingJobStatus\": \"Completed\"\n", " }\n", " ],\n", " \"ResponseMetadata\": {\n", " \"HTTPHeaders\": {\n", " \"content-length\": \"3933\",\n", " \"content-type\": \"application/x-amz-json-1.1\",\n", " \"date\": \"Fri, 18 Dec 2020 19:32:59 GMT\",\n", " \"x-amzn-requestid\": \"38bb8605-5d66-4bb6-82bc-93b42e6e565f\"\n", " },\n", " \"HTTPStatusCode\": 200,\n", " \"RequestId\": \"38bb8605-5d66-4bb6-82bc-93b42e6e565f\",\n", " \"RetryAttempts\": 0\n", " 
}\n", "}\n" ] } ], "source": [ "%pyspark list" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Validate Data Processing Results\n", "\n", "Next, validate the output of our data preprocessing job by looking at the first 5 rows of the output dataset." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "6.0,0.0,0.0,0.29,0.21,0.075,0.275,0.113,0.0675,0.035\n", "5.0,0.0,0.0,0.29,0.225,0.075,0.14,0.0515,0.0235,0.04\n", "7.0,0.0,0.0,0.305,0.225,0.07,0.1485,0.0585,0.0335,0.045\n", "7.0,0.0,0.0,0.305,0.23,0.08,0.156,0.0675,0.0345,0.048\n", "9.0,0.0,0.0,0.33,0.26,0.08,0.2,0.0625,0.05,0.07\n" ] } ], "source": [ "!aws s3 cp --quiet s3://sagemaker-eu-west-1-245582572290/sagemaker/spark-preprocess-demo/2020-12-17-17-19-06/input/preprocessed/abalone/train/part-00000 - | head -n5" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "SageMakerMagic (lblokhin/33)", "language": "python", "name": "sm__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-1:245582572290:image-version/lblokhin/33" }, "language_info": { "codemirror_mode": { "name": "python", "version": 3 }, "mimetype": "text/x-python", "name": "sm_kernel", "pygments_lexer": "python" } }, "nbformat": 4, "nbformat_minor": 4 }