+ POD_NAME=tensorflow-benchmarks-efa-worker-0
+ shift
+ /opt/kube/kubectl exec tensorflow-benchmarks-efa-worker-0 -- /bin/sh -c orted -mca ess "env" -mca ess_base_jobid "1761607680" -mca ess_base_vpid 1 -mca ess_base_num_procs "3" -mca orte_node_regex "tensorflow-benchmarks-efa-launcher-[3:265]kr,tensorflow-benchmarks-efa-worker-[1:0-1]@0(3)" -mca orte_hnp_uri "1761607680.0;tcp://192.168.58.30:34911" --mca plm_rsh_no_tree_spawn "1" --mca pml "ob1" --mca btl_vader_single_copy_mechanism "none" --mca btl_tcp_if_exclude "lo,docker0" -mca plm "rsh" -mca orte_default_hostfile "/etc/mpi/hostfile" -mca plm_rsh_agent "/etc/mpi/kubexec.sh" -mca hwloc_base_binding_policy "none" -mca rmaps_base_mapping_policy "slot" -mca pmix "^s1,s2,cray,isolated"
+ POD_NAME=tensorflow-benchmarks-efa-worker-1
+ shift
+ /opt/kube/kubectl exec tensorflow-benchmarks-efa-worker-1 -- /bin/sh -c orted -mca ess "env" -mca ess_base_jobid "1761607680" -mca ess_base_vpid 2 -mca ess_base_num_procs "3" -mca orte_node_regex "tensorflow-benchmarks-efa-launcher-[3:265]kr,tensorflow-benchmarks-efa-worker-[1:0-1]@0(3)" -mca orte_hnp_uri "1761607680.0;tcp://192.168.58.30:34911" --mca plm_rsh_no_tree_spawn "1" --mca pml "ob1" --mca btl_vader_single_copy_mechanism "none" --mca btl_tcp_if_exclude "lo,docker0" -mca plm "rsh" -mca orte_default_hostfile "/etc/mpi/hostfile" -mca plm_rsh_agent "/etc/mpi/kubexec.sh" -mca hwloc_base_binding_policy "none" -mca rmaps_base_mapping_policy "slot" -mca pmix "^s1,s2,cray,isolated"
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compat/v2_compat.py:88: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating: non-resource variables are not supported in the long term
2020-07-09 23:08:29.838750: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX512F
2020-07-09 23:08:29.848016: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2499995000 Hz
2020-07-09 23:08:29.848141: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x584d720 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-09 23:08:29.848160: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-07-09 23:08:29.850577: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-07-09 23:08:31.751111: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:31.754160: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5069dc0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-07-09 23:08:31.754203: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100-SXM2-32GB, Compute Capability 7.0
2020-07-09 23:08:31.763039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1c.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:31.763150: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-07-09 23:08:31.765334: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:31.766992: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-07-09 23:08:31.767297: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-07-09 23:08:31.768510: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1b.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:31.769033: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-07-09 23:08:31.770153: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-07-09 23:08:31.773934: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:31.778035: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:19.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:31.785794: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1d.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:31.793705: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:18.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:31.798583: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 6
2020-07-09 23:08:31.805868: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 5
2020-07-09 23:08:31.813511: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1a.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:31.816729: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 3
2020-07-09 23:08:31.819673: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:16.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:31.820021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:17.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:31.822266: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 7
2020-07-09 23:08:31.824498: I
tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:31.824950: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:31.825516: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:31.825935: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:31.827764: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 2 2020-07-09 23:08:31.827821: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.828983: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:31.829084: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.829451: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:31.829558: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.835430: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.844159: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 
23:08:31.845752: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.851412: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 4 2020-07-09 23:08:31.851473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.861207: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0 2020-07-09 23:08:31.861264: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.862560: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 1 2020-07-09 23:08:31.862612: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.904340: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.909788: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x515b690 initialized for platform CUDA (this does not guarantee that XLA will be used). 
Devices: 2020-07-09 23:08:31.909822: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100-SXM2-32GB, Compute Capability 7.0 2020-07-09 23:08:31.911157: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.915280: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.915395: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1d.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:31.915462: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.917157: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:31.918786: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:31.919114: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:31.920885: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:31.921952: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.921948: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 
23:08:31.923608: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.924771: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.925636: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:31.925752: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.927315: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.927613: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.927686: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5851900 initialized for platform CUDA (this does not guarantee that XLA will be used). 
Devices: 2020-07-09 23:08:31.927715: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100-SXM2-32GB, Compute Capability 7.0 2020-07-09 23:08:31.928011: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.933269: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.939135: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4c727f0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-07-09 23:08:31.939170: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100-SXM2-32GB, Compute Capability 7.0 2020-07-09 23:08:31.940477: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5b16100 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-07-09 23:08:31.940510: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100-SXM2-32GB, Compute Capability 7.0 2020-07-09 23:08:31.942219: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x41986c0 initialized for platform CUDA (this does not guarantee that XLA will be used). 
Devices: 2020-07-09 23:08:31.942249: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100-SXM2-32GB, Compute Capability 7.0 2020-07-09 23:08:31.945260: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.945388: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.946267: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.947659: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.949576: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4847f20 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-07-09 23:08:31.949601: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100-SXM2-32GB, Compute Capability 7.0 2020-07-09 23:08:31.949973: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x44ea8e0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-07-09 23:08:31.949999: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100-SXM2-32GB, Compute Capability 7.0 2020-07-09 23:08:31.950683: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5159cc0 initialized for platform CUDA (this does not guarantee that XLA will be used). 
Devices: 2020-07-09 23:08:31.950709: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100-SXM2-32GB, Compute Capability 7.0 2020-07-09 23:08:31.951780: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1b.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:31.951806: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.951858: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.952106: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.953439: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:31.953480: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.955238: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:31.955514: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:31.957140: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:31.958125: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 
23:08:31.959319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:17.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:31.959407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 7 2020-07-09 23:08:31.959418: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.959463: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.959703: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1a.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:31.959767: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.960632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1c.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:31.960727: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.960953: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:31.961303: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:31.961942: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 
23:08:31.962040: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.962236: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:31.962369: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:31.962612: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:31.962670: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:31.962889: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:31.963510: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:31.963802: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:31.964264: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:31.964486: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:31.965283: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:31.965403: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:31.965450: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:16.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 
1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:31.965484: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:31.965539: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.965721: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:19.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:31.965819: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.965894: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:18.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:31.965981: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.966378: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:31.967032: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:31.967396: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:31.967495: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:31.967922: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one 
NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.968302: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:31.968586: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:31.968713: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:31.968768: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:31.968801: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:31.968909: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.968930: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:31.969033: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.969054: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:31.969064: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:31.969875: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:31.969994: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 
23:08:31.970200: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:31.970722: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:31.970791: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:31.971237: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:31.971728: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:31.971818: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:31.974609: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:31.974723: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.974767: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 5 2020-07-09 23:08:31.974832: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.975079: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:31.975182: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.975255: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 
2020-07-09 23:08:31.975376: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.978047: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.978410: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.981785: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.990111: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.990956: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.991231: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:31.994291: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 1 2020-07-09 23:08:31.994358: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.994758: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 4 2020-07-09 23:08:31.994816: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:31.996946: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 6 2020-07-09 23:08:31.997052: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:32.007142: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0 2020-07-09 23:08:32.007190: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:32.008011: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 2 2020-07-09 23:08:32.008063: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:32.008181: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 3 2020-07-09 23:08:32.008255: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:32.091027: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:32.091095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 5 2020-07-09 23:08:32.091103: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 5: N 2020-07-09 23:08:32.091420: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.093191: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.094966: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 5, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1b.0, compute capability: 7.0)
2020-07-09 23:08:32.099383: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-09 23:08:32.099417: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      3
2020-07-09 23:08:32.099425: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 3:   N
2020-07-09 23:08:32.099718: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:32.103425: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 3, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:19.0, compute capability: 7.0)
2020-07-09 23:08:32.108402: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 6, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1c.0, compute capability: 7.0)
2020-07-09 23:08:32.111441: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 7, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1d.0, compute capability: 7.0)
2020-07-09 23:08:32.121172: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 4, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1a.0, compute capability: 7.0)
2020-07-09 23:08:32.128338: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 1, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:17.0, compute capability: 7.0)
2020-07-09 23:08:32.128499: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 0, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:16.0, compute capability: 7.0)
2020-07-09 23:08:32.140519: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 2, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:18.0, compute capability: 7.0)
TensorFlow:  2.1
Model:       resnet101
Dataset:     imagenet (synthetic)
Mode:        training
SingleSess:  False
Batch size:  1024 global
             64 per device
Num batches: 100
Num epochs:  0.08
Devices:     ['horovod/gpu:0', 'horovod/gpu:1', 'horovod/gpu:2', 'horovod/gpu:3', 'horovod/gpu:4', 'horovod/gpu:5', 'horovod/gpu:6', 'horovod/gpu:7', 'horovod/gpu:8', 'horovod/gpu:9', 'horovod/gpu:10', 'horovod/gpu:11', 'horovod/gpu:12', 'horovod/gpu:13', 'horovod/gpu:14', 'horovod/gpu:15']
NUMA bind:   False
Data format: NCHW
Optimizer:   sgd
Variables:   horovod
==========
Generating training model
WARNING:tensorflow:From /workspace/benchmarks/scripts/tf_cnn_benchmarks/convnet_builder.py:134: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.keras.layers.Conv2D` instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/layers/convolutional.py:424: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.__call__` method instead.
WARNING:tensorflow:From /workspace/benchmarks/scripts/tf_cnn_benchmarks/convnet_builder.py:266: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
WARNING:tensorflow:From /workspace/benchmarks/scripts/tf_cnn_benchmarks/convnet_builder.py:266: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version. Instructions for updating: Use keras.layers.MaxPooling2D instead. W0709 23:08:32.201663 22451508512576 deprecation.py:323] From /workspace/benchmarks/scripts/tf_cnn_benchmarks/convnet_builder.py:266: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version. Instructions for updating: Use keras.layers.MaxPooling2D instead. 2020-07-09 23:08:32.225531: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:32.225585: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 7 2020-07-09 23:08:32.225594: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 7: N 2020-07-09 23:08:32.225907: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.227985: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.229668: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 7, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1d.0, compute capability: 7.0) TensorFlow: 2.1 Model: resnet101 Dataset: imagenet (synthetic) Mode: training SingleSess: False Batch size: 1024 global 64 per device Num batches: 100 Num epochs: 0.08 Devices: ['horovod/gpu:0', 'horovod/gpu:1', 'horovod/gpu:2', 'horovod/gpu:3', 'horovod/gpu:4', 'horovod/gpu:5', 'horovod/gpu:6', 'horovod/gpu:7', 'horovod/gpu:8', 'horovod/gpu:9', 'horovod/gpu:10', 
'horovod/gpu:11', 'horovod/gpu:12', 'horovod/gpu:13', 'horovod/gpu:14', 'horovod/gpu:15'] NUMA bind: False Data format: NCHW Optimizer: sgd Variables: horovod ========== Generating training model WARNING:tensorflow:From /workspace/benchmarks/scripts/tf_cnn_benchmarks/convnet_builder.py:134: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.keras.layers.Conv2D` instead. W0709 23:08:32.252199 22562318976832 deprecation.py:323] From /workspace/benchmarks/scripts/tf_cnn_benchmarks/convnet_builder.py:134: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.keras.layers.Conv2D` instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/layers/convolutional.py:424: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version. Instructions for updating: Please use `layer.__call__` method instead. W0709 23:08:32.253838 22562318976832 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/layers/convolutional.py:424: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version. Instructions for updating: Please use `layer.__call__` method instead. 
2020-07-09 23:08:32.258621: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:32.258660: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 1 2020-07-09 23:08:32.258668: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 1: N 2020-07-09 23:08:32.258952: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.260496: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:32.260528: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 4 2020-07-09 23:08:32.260536: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 4: N 2020-07-09 23:08:32.260803: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.260821: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.260825: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:32.260853: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 6 2020-07-09 23:08:32.260861: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 6: N 2020-07-09 23:08:32.261232: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.261902: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge 
matrix: 2020-07-09 23:08:32.261932: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0 2020-07-09 23:08:32.261940: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N 2020-07-09 23:08:32.262347: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.263076: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:32.263107: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 2 2020-07-09 23:08:32.263115: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 2: N 2020-07-09 23:08:32.264213: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:32.264242: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 3 2020-07-09 23:08:32.264249: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 3: N 2020-07-09 23:08:32.264297: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.265476: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.267383: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.267519: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 1, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:17.0, compute capability: 
7.0) 2020-07-09 23:08:32.268831: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.272326: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero TensorFlow: 2.1 Model: resnet101 Dataset: imagenet (synthetic) Mode: training SingleSess: False Batch size: 1024 global 64 per device Num batches: 100 Num epochs: 0.08 Devices: ['horovod/gpu:0', 'horovod/gpu:1', 'horovod/gpu:2', 'horovod/gpu:3', 'horovod/gpu:4', 'horovod/gpu:5', 'horovod/gpu:6', 'horovod/gpu:7', 'horovod/gpu:8', 'horovod/gpu:9', 'horovod/gpu:10', 'horovod/gpu:11', 'horovod/gpu:12', 'horovod/gpu:13', 'horovod/gpu:14', 'horovod/gpu:15'] NUMA bind: False Data format: NCHW Optimizer: sgd Variables: horovod ========== Generating training model 2020-07-09 23:08:32.273621: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.274616: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:32.276061: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 4, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1a.0, compute capability: 7.0) 2020-07-09 23:08:32.276510: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 6, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1c.0, 
compute capability: 7.0) 2020-07-09 23:08:32.279903: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 0, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:16.0, compute capability: 7.0) 2020-07-09 23:08:32.280153: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 2, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:18.0, compute capability: 7.0) 2020-07-09 23:08:32.280327: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 3, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:19.0, compute capability: 7.0) WARNING:tensorflow:From /workspace/benchmarks/scripts/tf_cnn_benchmarks/convnet_builder.py:266: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version. Instructions for updating: Use keras.layers.MaxPooling2D instead. W0709 23:08:32.280413 22562318976832 deprecation.py:323] From /workspace/benchmarks/scripts/tf_cnn_benchmarks/convnet_builder.py:266: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version. Instructions for updating: Use keras.layers.MaxPooling2D instead. 
2020-07-09 23:08:32.284760: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-09 23:08:32.284802: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      5
2020-07-09 23:08:32.284814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 5:   N
WARNING:tensorflow:From /workspace/benchmarks/scripts/tf_cnn_benchmarks/convnet_builder.py:134: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.keras.layers.Conv2D` instead.
2020-07-09 23:08:32.287841: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/layers/convolutional.py:424: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.__call__` method instead.
2020-07-09 23:08:32.290210: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 5, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1b.0, compute capability: 7.0)
Initializing graph
WARNING:tensorflow:From /workspace/benchmarks/scripts/tf_cnn_benchmarks/benchmark_cnn.py:2268: Supervisor.__init__ (from tensorflow.python.training.supervisor) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.MonitoredTrainingSession
2020-07-09 23:08:36.694323: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:00:1a.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:36.694375: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-07-09 23:08:36.694404: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:36.694415: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-07-09 23:08:36.694426: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-07-09 23:08:36.694437: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-07-09 23:08:36.694448: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-07-09 23:08:36.694460: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:36.698429: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 4
2020-07-09 23:08:36.698471: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-09 23:08:36.698479: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      4
2020-07-09 23:08:36.698485: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 4:   N
2020-07-09 23:08:36.700232: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:00:1b.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:36.705668: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 4, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1a.0, compute capability: 7.0)
2020-07-09 23:08:36.706876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 5
2020-07-09 23:08:36.706914: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-09 23:08:36.706937: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      5
2020-07-09 23:08:36.706944: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 5:   N
2020-07-09 23:08:36.710541: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 5, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1b.0, compute capability: 7.0)
2020-07-09 23:08:36.721819: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:00:1c.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0
coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:36.725394: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 6
2020-07-09 23:08:36.725422: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-09 23:08:36.725430: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      6
2020-07-09 23:08:36.725435: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 6:   N
Instructions for updating: Please switch to tf.train.MonitoredTrainingSession 2020-07-09 23:08:36.727251: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.728895: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 6, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1c.0, compute capability: 7.0) 2020-07-09 23:08:36.782878: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.784741: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:18.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:36.784794: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:36.784824: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:36.784835: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:36.784846: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:36.784857: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:36.784867: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:36.784878: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:36.784937: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.786601: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.788241: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 2 2020-07-09 23:08:36.788273: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:36.788281: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 2 2020-07-09 23:08:36.788286: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 2: N 2020-07-09 23:08:36.788423: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.790082: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.791729: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 2, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:18.0, compute capability: 7.0) 2020-07-09 23:08:36.794620: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.796386: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:19.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:36.796439: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:36.796471: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:36.796482: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:36.796492: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:36.796503: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:36.796512: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:36.796523: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:36.796585: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.798253: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.799903: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 3 2020-07-09 23:08:36.799940: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:36.799946: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 3 2020-07-09 23:08:36.799952: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 3: N 2020-07-09 23:08:36.800097: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.801762: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.803417: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 3, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:19.0, compute capability: 7.0) 2020-07-09 23:08:36.814116: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.815907: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:17.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:36.815961: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:36.815992: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:36.816003: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:36.816013: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:36.816023: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:36.816032: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:36.816042: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:36.816102: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.817638: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.817771: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.821098: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1d.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:36.821155: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:36.821170: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 1 2020-07-09 23:08:36.821187: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:36.821198: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:36.821208: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] 
Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:36.821220: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:36.821231: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:36.821242: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:36.821207: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:36.821216: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 1 2020-07-09 23:08:36.821224: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 1: N 2020-07-09 23:08:36.821308: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.821385: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.824753: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.824843: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.828116: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 7 2020-07-09 23:08:36.828154: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:36.828162: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 7 2020-07-09 23:08:36.828169: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 7: N 2020-07-09 23:08:36.828215: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 1, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:17.0, compute capability: 7.0) 2020-07-09 23:08:36.828332: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.830029: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.831688: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 7, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1d.0, compute capability: 7.0) 2020-07-09 23:08:36.858153: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.859930: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1c.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:36.859976: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:36.860010: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:36.860021: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:36.860031: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:36.860042: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:36.860051: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:36.860061: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:36.860111: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.861846: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.863491: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 6 2020-07-09 23:08:36.863523: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:36.863530: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 6 2020-07-09 23:08:36.863536: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 6: N 2020-07-09 23:08:36.863675: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.865335: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning 
NUMA node zero 2020-07-09 23:08:36.866994: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 6, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1c.0, compute capability: 7.0) 2020-07-09 23:08:36.889579: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.892019: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1a.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:36.892078: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:36.892121: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:36.892135: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:36.892148: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:36.892160: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:36.892172: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:36.892185: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:36.892248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one 
NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.894177: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.895832: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 4 2020-07-09 23:08:36.895864: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:36.895871: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 4 2020-07-09 23:08:36.895877: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 4: N 2020-07-09 23:08:36.896020: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.897681: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.898790: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.899806: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 4, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1a.0, compute capability: 7.0) 2020-07-09 23:08:36.901095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:18.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:36.901156: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:36.901193: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:36.901206: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:36.901216: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:36.901227: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:36.901236: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:36.901247: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:36.901306: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.903032: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.904666: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 2 2020-07-09 23:08:36.904696: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:36.904703: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 2 2020-07-09 23:08:36.904708: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 2: N 2020-07-09 23:08:36.904846: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative 
value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.906539: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.906626: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.909806: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 2, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:18.0, compute capability: 7.0) 2020-07-09 23:08:36.909976: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:19.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s 2020-07-09 23:08:36.910035: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2020-07-09 23:08:36.910076: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2020-07-09 23:08:36.910105: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2020-07-09 23:08:36.910118: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2020-07-09 23:08:36.910132: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2020-07-09 23:08:36.910154: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2020-07-09 23:08:36.910168: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-07-09 23:08:36.910232: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.912043: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.913701: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 3 2020-07-09 23:08:36.913731: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-07-09 23:08:36.913738: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 3 2020-07-09 23:08:36.913744: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 3: N 2020-07-09 23:08:36.913884: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.915602: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.917265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 3, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:19.0, compute capability: 7.0) 2020-07-09 23:08:36.960288: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-07-09 23:08:36.962108: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:16.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:36.962155: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-07-09 23:08:36.962188: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:36.962198: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-07-09 23:08:36.962208: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-07-09 23:08:36.962219: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-07-09 23:08:36.962228: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-07-09 23:08:36.962239: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:36.962291: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:36.963994: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:36.965622: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-07-09 23:08:36.965651: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-09 23:08:36.965658: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-07-09 23:08:36.965664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-07-09 23:08:36.965692: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:36.965802: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:36.969127: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:17.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:36.969188: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-07-09 23:08:36.969220: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:36.969231: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-07-09 23:08:36.969242: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-07-09 23:08:36.969253: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-07-09 23:08:36.969263: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-07-09 23:08:36.969255: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:36.969275: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:36.969333: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:36.972942: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 0, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:16.0, compute capability: 7.0)
2020-07-09 23:08:36.973059: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:36.974729: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 1
2020-07-09 23:08:36.974763: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-09 23:08:36.974771: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 1
2020-07-09 23:08:36.974777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 1: N
2020-07-09 23:08:36.974924: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:36.976641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:36.978305: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 1, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:17.0, compute capability: 7.0)
2020-07-09 23:08:36.997843: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:36.999615: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1b.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:36.999674: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-07-09 23:08:36.999710: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:36.999721: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-07-09 23:08:36.999732: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-07-09 23:08:36.999742: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-07-09 23:08:36.999752: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-07-09 23:08:36.999762: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:36.999834: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.001578: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.003220: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 5
2020-07-09 23:08:37.003252: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-09 23:08:37.003260: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 5
2020-07-09 23:08:37.003266: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 5: N
2020-07-09 23:08:37.003415: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.005081: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.006743: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 5, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1b.0, compute capability: 7.0)
2020-07-09 23:08:37.183263: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.185028: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:1d.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:37.185080: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-07-09 23:08:37.185117: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:37.185128: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-07-09 23:08:37.185139: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-07-09 23:08:37.185149: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-07-09 23:08:37.185159: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-07-09 23:08:37.185170: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:37.185233: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.186921: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.189561: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 7
2020-07-09 23:08:37.189598: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-09 23:08:37.189605: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 7
2020-07-09 23:08:37.189612: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 7: N
2020-07-09 23:08:37.189807: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.192857: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.196599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 7, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:1d.0, compute capability: 7.0)
2020-07-09 23:08:37.379190: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.380951: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:00:16.0 name: Tesla V100-SXM2-32GB computeCapability: 7.0 coreClock: 1.53GHz coreCount: 80 deviceMemorySize: 31.72GiB deviceMemoryBandwidth: 836.37GiB/s
2020-07-09 23:08:37.381002: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-07-09 23:08:37.381033: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:37.381045: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-07-09 23:08:37.381056: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-07-09 23:08:37.381067: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-07-09 23:08:37.381078: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-07-09 23:08:37.381088: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:37.381151: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.382831: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.384570: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-07-09 23:08:37.384612: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-09 23:08:37.384620: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-07-09 23:08:37.384626: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-07-09 23:08:37.384783: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.386467: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-09 23:08:37.388146: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30525 MB memory) -> physical GPU (device: 0, name: Tesla V100-SXM2-32GB, pci bus id: 0000:00:16.0, compute capability: 7.0)
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.304350 22365250090816 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.359054 22670838220608 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.383267 22426221500224 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.408304 22451508512576 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.423796 22365250090816 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.479883 23324635936576 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.499810 22426221500224 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.526510 22451508512576 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.533604 22670838220608 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.571515 22740969011008 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.579821 22701052753728 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.610413 23324635936576 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.658591 22562318976832 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.684272 23266222626624 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.704710 22740969011008 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.708743 22701052753728 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.748303 22784175241024 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.761878 22562318976832 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.798194 23202770114368 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.810250 23266222626624 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.851886 23072062064448 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.873574 23105172625216 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.873765 22784175241024 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.887573 22697700075328 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:39.903532 22880304547648 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.923585 23202770114368 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.978496 23072062064448 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:39.999476 23105172625216 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:40.013711 22697700075328 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:40.029826 22880304547648 session_manager.py:507] Done running local_init_op.
INFO:tensorflow:Running local_init_op.
I0709 23:08:40.212810 23290081347392 session_manager.py:504] Running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0709 23:08:40.352023 23290081347392 session_manager.py:507] Done running local_init_op.
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
Running warm up
2020-07-09 23:08:45.443400: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:45.686158: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:45.695777: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:45.701839: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:45.747537: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:45.837040: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:45.946813: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:45.964512: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:45.969542: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:45.980956: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:45.984480: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:46.006595: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.029312: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:46.034849: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:46.035093: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.037645: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.049265: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:46.057877: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:46.058854: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:46.173487: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-07-09 23:08:46.179399: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.281561: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.316729: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.342333: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.349468: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.353306: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.371305: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.397094: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.412600: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.422767: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.422806: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-09 23:08:46.519923: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Bootstrap : Using [0]eth0:192.168.47.63<0>
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Using network AWS Libfabric
NCCL version 2.6.4+cuda10.1
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Bootstrap : Using [0]eth0:192.168.47.63<0>
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Bootstrap : Using [0]eth0:192.168.47.63<0>
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Bootstrap : Using [0]eth0:192.168.47.63<0>
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Bootstrap : Using [0]eth0:192.168.47.63<0>
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Bootstrap : Using [0]eth0:192.168.47.63<0>
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Bootstrap : Using [0]eth0:192.168.47.63<0>
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Bootstrap : Using [0]eth0:192.168.47.63<0>
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Bootstrap : Using [0]eth0:192.168.43.64<0>
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Bootstrap : Using [0]eth0:192.168.43.64<0>
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Bootstrap : Using [0]eth0:192.168.43.64<0>
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Bootstrap : Using [0]eth0:192.168.43.64<0>
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Bootstrap : Using [0]eth0:192.168.43.64<0>
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Bootstrap : Using [0]eth0:192.168.43.64<0>
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Bootstrap : Using [0]eth0:192.168.43.64<0>
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Bootstrap : Using [0]eth0:192.168.43.64<0>
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Using network AWS Libfabric
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/OFI Forcing AWS OFI ndev 4
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/OFI Selected Provider is efa
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v3 symbol.
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Using network AWS Libfabric tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/OFI [2] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00 tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/OFI [2] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00/ tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/OFI [2] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00 tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/OFI [0] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00/ tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/OFI [0] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00 tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/OFI [3] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00 tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/OFI [3] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00/ tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/OFI [3] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00 tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/OFI [1] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00/ tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/OFI [1] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00 tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/OFI [1] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00 tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/OFI [1] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00 tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/OFI [3] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00 tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/OFI [2] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00 
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/OFI [4] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/OFI [4] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/OFI [4] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/OFI [4] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/OFI [0] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/OFI [0] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/OFI [5] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/OFI [5] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/OFI [5] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/OFI [5] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/OFI [6] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/OFI [6] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/OFI [6] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/OFI [6] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/OFI [7] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/OFI [7] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/OFI [7] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/OFI [7] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/OFI [0] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/OFI [0] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/OFI [0] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/OFI [2] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/OFI [2] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/OFI [2] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/OFI [1] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/OFI [1] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/OFI [1] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/OFI [1] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/OFI [4] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/OFI [4] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/OFI [3] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/OFI [3] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/OFI [3] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/OFI [3] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/OFI [0] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/OFI [2] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/OFI [4] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/OFI [5] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/OFI [4] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/OFI [5] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/OFI [5] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/OFI [6] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/OFI [5] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/OFI [6] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/OFI [7] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/OFI [7] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/OFI [6] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/OFI [7] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/OFI [7] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/OFI [6] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/OFI [7] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/OFI [7] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/OFI [7] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO NET/OFI [7] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/OFI [6] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/OFI [6] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/OFI [6] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO NET/OFI [6] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/OFI [1] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/OFI [1] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/OFI [1] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO NET/OFI [1] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/OFI [3] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/OFI [3] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/OFI [3] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/OFI [0] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO NET/OFI [3] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/OFI [0] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/OFI [0] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO NET/OFI [0] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/OFI [2] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/OFI [2] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/OFI [2] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO NET/OFI [2] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/OFI [5] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/OFI [5] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/OFI [5] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO NET/OFI [5] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/OFI [4] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/OFI [4] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/OFI [4] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO NET/OFI [4] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/OFI [3] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/OFI [3] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/OFI [3] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO NET/OFI [3] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/OFI [5] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/OFI [5] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/OFI [5] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO NET/OFI [5] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/OFI [4] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/OFI [4] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/OFI [4] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO NET/OFI [4] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/OFI [0] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/OFI [0] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/OFI [0] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO NET/OFI [0] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/OFI [1] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/OFI [1] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/OFI [1] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO NET/OFI [1] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/OFI [2] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/OFI [2] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/OFI [2] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO NET/OFI [2] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/OFI [6] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/OFI [6] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/OFI [6] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO NET/OFI [6] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/OFI [7] getCudaPath dev 0 busId 0000:00:16.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/OFI [7] getCudaPath dev 1 busId 0000:00:17.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/OFI [7] getCudaPath dev 2 busId 0000:00:18.0 path /sys/devices/pci0000:00
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO NET/OFI [7] getCudaPath dev 3 busId 0000:00:19.0 path /sys/devices/pci0000:00/
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Trees [0] 7/-1/-1->6->5|5->6->7/-1/-1 [1] 5/-1/-1->6->7|7->6->5/-1/-1 [2] 5/-1/-1->6->7|7->6->5/-1/-1 [3] 7/13/-1->6->5|5->6->7/13/-1 [4] 7/-1/-1->6->5|5->6->7/-1/-1 [5] 5/-1/-1->6->7|7->6->5/-1/-1 [6] 5/-1/-1->6->7|7->6->5/-1/-1 [7] 7/-1/-1->6->5|5->6->7/-1/-1
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Trees [0] 6/-1/-1->5->1|1->5->6/-1/-1 [1] -1/-1/-1->5->6|6->5->-1/-1/-1 [2] 1/-1/-1->5->6|6->5->1/-1/-1 [3] 6/-1/-1->5->-1|-1->5->6/-1/-1 [4] 6/-1/-1->5->1|1->5->6/-1/-1 [5] -1/-1/-1->5->6|6->5->-1/-1/-1 [6] 1/-1/-1->5->6|6->5->1/-1/-1 [7] 6/-1/-1->5->14|14->5->6/-1/-1
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Trees [0] 4/-1/-1->7->6|6->7->4/-1/-1 [1] 6/-1/-1->7->4|4->7->6/-1/-1 [2] 6/12/-1->7->4|4->7->6/12/-1 [3] 4/-1/-1->7->6|6->7->4/-1/-1 [4] 4/-1/-1->7->6|6->7->4/-1/-1 [5] 6/-1/-1->7->4|4->7->6/-1/-1 [6] 6/-1/-1->7->4|4->7->6/-1/-1 [7] 4/-1/-1->7->6|6->7->4/-1/-1
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Trees [0] 11/-1/-1->8->3|3->8->11/-1/-1 [1] 12/-1/-1->8->11|11->8->12/-1/-1 [2] -1/-1/-1->8->11|11->8->-1/-1/-1 [3] 11/-1/-1->8->12|12->8->11/-1/-1 [4] 11/-1/-1->8->-1|-1->8->11/-1/-1 [5] 12/-1/-1->8->11|11->8->12/-1/-1 [6] -1/-1/-1->8->11|11->8->-1/-1/-1 [7] 11/-1/-1->8->12|12->8->11/-1/-1
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Trees [0] 9/-1/-1->10->11|11->10->9/-1/-1 [1] 11/-1/-1->10->9|9->10->11/-1/-1 [2] 11/-1/-1->10->9|9->10->11/-1/-1 [3] 9/-1/-1->10->11|11->10->9/-1/-1 [4] 9/-1/-1->10->11|11->10->9/-1/-1 [5] 11/1/-1->10->9|9->10->11/1/-1 [6] 11/-1/-1->10->9|9->10->11/-1/-1 [7] 9/-1/-1->10->11|11->10->9/-1/-1
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Trees [0] 13/-1/-1->9->10|10->9->13/-1/-1 [1] 10/-1/-1->9->2|2->9->10/-1/-1 [2] 10/-1/-1->9->13|13->9->10/-1/-1 [3] -1/-1/-1->9->10|10->9->-1/-1/-1 [4] 13/-1/-1->9->10|10->9->13/-1/-1 [5] 10/-1/-1->9->-1|-1->9->10/-1/-1 [6] 10/-1/-1->9->13|13->9->10/-1/-1 [7] -1/-1/-1->9->10|10->9->-1/-1/-1
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Trees [0] 10/-1/-1->11->8|8->11->10/-1/-1 [1] 8/-1/-1->11->10|10->11->8/-1/-1 [2] 8/-1/-1->11->10|10->11->8/-1/-1 [3] 10/-1/-1->11->8|8->11->10/-1/-1 [4] 10/0/-1->11->8|8->11->10/0/-1 [5] 8/-1/-1->11->10|10->11->8/-1/-1 [6] 8/-1/-1->11->10|10->11->8/-1/-1 [7] 10/-1/-1->11->8|8->11->10/-1/-1
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Trees [0] -1/-1/-1->12->15|15->12->-1/-1/-1 [1] 15/-1/-1->12->8|8->12->15/-1/-1 [2] 15/-1/-1->12->7|7->12->15/-1/-1 [3] 8/-1/-1->12->15|15->12->8/-1/-1 [4] -1/-1/-1->12->15|15->12->-1/-1/-1 [5] 15/-1/-1->12->8|8->12->15/-1/-1 [6] 15/-1/-1->12->-1|-1->12->15/-1/-1 [7] 8/-1/-1->12->15|15->12->8/-1/-1
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Trees [0] 14/-1/-1->13->9|9->13->14/-1/-1 [1] -1/-1/-1->13->14|14->13->-1/-1/-1 [2] 9/-1/-1->13->14|14->13->9/-1/-1 [3] 14/-1/-1->13->6|6->13->14/-1/-1 [4] 14/-1/-1->13->9|9->13->14/-1/-1 [5] -1/-1/-1->13->14|14->13->-1/-1/-1 [6] 9/-1/-1->13->14|14->13->9/-1/-1 [7] 14/-1/-1->13->-1|-1->13->14/-1/-1
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Trees [0] 15/-1/-1->14->13|13->14->15/-1/-1 [1] 13/-1/-1->14->15|15->14->13/-1/-1 [2] 13/-1/-1->14->15|15->14->13/-1/-1 [3] 15/-1/-1->14->13|13->14->15/-1/-1 [4] 15/-1/-1->14->13|13->14->15/-1/-1 [5] 13/-1/-1->14->15|15->14->13/-1/-1 [6] 13/-1/-1->14->15|15->14->13/-1/-1 [7] 15/5/-1->14->13|13->14->15/5/-1
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Trees [0] 12/-1/-1->15->14|14->15->12/-1/-1 [1] 14/-1/-1->15->12|12->15->14/-1/-1 [2] 14/-1/-1->15->12|12->15->14/-1/-1 [3] 12/-1/-1->15->14|14->15->12/-1/-1 [4] 12/-1/-1->15->14|14->15->12/-1/-1 [5] 14/-1/-1->15->12|12->15->14/-1/-1 [6] 14/4/-1->15->12|12->15->14/4/-1 [7] 12/-1/-1->15->14|14->15->12/-1/-1
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Channel 00/08 : 0 3 2 1 5 6 7 4 8 11 10 9 13 14 15 12
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Channel 01/08 : 0 4 7 6 5 9 10 11 8 12 15 14 13 1 2 3
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Channel 02/08 : 0 1 5 6 10 11 15 12 8 9 13 14 2 3 7 4
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Channel 03/08 : 0 1 2 6 5 4 7 11 8 9 10 14 13 12 15 3
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Channel 04/08 : 0 3 2 1 5 6 7 4 8 11 10 9 13 14 15 12
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Channel 05/08 : 0 4 7 6 5 9 10 11 8 12 15 14 13 1 2 3
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Channel 06/08 : 0 1 5 6 10 11 15 12 8 9 13 14 2 3 7 4
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Channel 07/08 : 0 1 2 6 5 4 7 11 8 9 10 14 13 12 15 3
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Trees [0] 3/-1/-1->0->-1|-1->0->3/-1/-1 [1] 4/-1/-1->0->3|3->0->4/-1/-1 [2] -1/-1/-1->0->3|3->0->-1/-1/-1 [3] 3/-1/-1->0->4|4->0->3/-1/-1 [4] 3/-1/-1->0->11|11->0->3/-1/-1 [5] 4/-1/-1->0->3|3->0->4/-1/-1 [6] -1/-1/-1->0->3|3->0->-1/-1/-1 [7] 3/-1/-1->0->4|4->0->3/-1/-1
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Trees [0] 5/-1/-1->1->2|2->1->5/-1/-1 [1] 2/-1/-1->1->-1|-1->1->2/-1/-1 [2] 2/-1/-1->1->5|5->1->2/-1/-1 [3] -1/-1/-1->1->2|2->1->-1/-1/-1 [4] 5/-1/-1->1->2|2->1->5/-1/-1 [5] 2/-1/-1->1->10|10->1->2/-1/-1 [6] 2/-1/-1->1->5|5->1->2/-1/-1 [7] -1/-1/-1->1->2|2->1->-1/-1/-1
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Trees [0] 1/-1/-1->2->3|3->2->1/-1/-1 [1] 3/9/-1->2->1|1->2->3/9/-1 [2] 3/-1/-1->2->1|1->2->3/-1/-1 [3] 1/-1/-1->2->3|3->2->1/-1/-1 [4] 1/-1/-1->2->3|3->2->1/-1/-1 [5] 3/-1/-1->2->1|1->2->3/-1/-1 [6] 3/-1/-1->2->1|1->2->3/-1/-1 [7] 1/-1/-1->2->3|3->2->1/-1/-1
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Trees [0] 2/8/-1->3->0|0->3->2/8/-1 [1] 0/-1/-1->3->2|2->3->0/-1/-1 [2] 0/-1/-1->3->2|2->3->0/-1/-1 [3] 2/-1/-1->3->0|0->3->2/-1/-1 [4] 2/-1/-1->3->0|0->3->2/-1/-1 [5] 0/-1/-1->3->2|2->3->0/-1/-1 [6] 0/-1/-1->3->2|2->3->0/-1/-1 [7] 2/-1/-1->3->0|0->3->2/-1/-1
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO threadThresholds 8/8/64 | 128/8/64 | 8/8/64
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Trees [0] -1/-1/-1->4->7|7->4->-1/-1/-1 [1] 7/-1/-1->4->0|0->4->7/-1/-1 [2] 7/-1/-1->4->-1|-1->4->7/-1/-1 [3] 0/-1/-1->4->7|7->4->0/-1/-1 [4] -1/-1/-1->4->7|7->4->-1/-1/-1 [5] 7/-1/-1->4->0|0->4->7/-1/-1 [6] 7/-1/-1->4->15|15->4->7/-1/-1 [7] 0/-1/-1->4->7|7->4->0/-1/-1
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 00 : 5[1b0] -> 6[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 00 : 6[1c0] -> 7[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 00 : 7[1d0] -> 4[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 00 : 12[1a0] -> 0[160] [receive] via NET/AWS Libfabric/0
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 00 : 1[170] -> 5[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 00 : 2[180] -> 1[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 00 : 3[190] -> 2[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 00 : 4[1a0] -> 8[160] [receive] via NET/AWS Libfabric/0
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 00 : 4[1a0] -> 8[160] [send] via NET/AWS Libfabric/0
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 00 : 9[170] -> 13[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 00 : 10[180] -> 9[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 00 : 11[190] -> 10[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 00 : 13[1b0] -> 14[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 00 : 14[1c0] -> 15[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 00 : 15[1d0] -> 12[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 00 : 12[1a0] -> 0[160] [send] via NET/AWS Libfabric/0
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 00 : 6[1c0] -> 5[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 00 : 5[1b0] -> 1[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 00 : 7[1d0] -> 6[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 00 : 1[170] -> 2[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 00 : 2[180] -> 3[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 01 : 6[1c0] -> 5[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 00 : 9[170] -> 10[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 00 : 10[180] -> 11[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 00 : 13[1b0] -> 9[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 00 : 14[1c0] -> 13[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 00 : 15[1d0] -> 14[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 01 : 13[1b0] -> 1[170] [receive] via NET/AWS Libfabric/1
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 01 : 5[1b0] -> 9[170] [send] via NET/AWS Libfabric/1
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 01 : 14[1c0] -> 13[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 00 : 0[160] -> 3[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 01 : 5[1b0] -> 9[170] [receive] via NET/AWS Libfabric/1
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 01 : 13[1b0] -> 1[170] [send] via NET/AWS Libfabric/1
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 01 : 2[180] -> 3[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 00 : 8[160] -> 11[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 00 : 8[160] -> 3[190] [receive] via NET/AWS Libfabric/0
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 00 : 11[190] -> 8[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 01 : 10[180] -> 11[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 01 : 1[170] -> 2[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 01 : 9[170] -> 10[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 00 : 3[190] -> 0[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 00 : 12[1a0] -> 15[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 00 : 4[1a0] -> 7[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 01 : 12[1a0] -> 15[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 01 : 15[1d0] -> 14[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 01 : 4[1a0] -> 7[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 01 : 7[1d0] -> 6[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 01 : 14[1c0] -> 15[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 01 : 15[1d0] -> 12[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 01 : 6[1c0] -> 7[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 01 : 7[1d0] -> 4[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 01 : 0[160] -> 4[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 01 : 11[190] -> 8[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 01 : 10[180] -> 9[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 01 : 4[1a0] -> 0[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 00 : 8[160] -> 3[190] [send] via NET/AWS Libfabric/0
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 02 : 7[1d0] -> 4[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 01 : 13[1b0] -> 14[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 02 : 13[1b0] -> 14[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 00 : 3[190] -> 8[160] [receive] via NET/AWS Libfabric/0
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 01 : 5[1b0] -> 6[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 02 : 14[1c0] -> 2[180] [send] via NET/AWS Libfabric/2
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 02 : 5[1b0] -> 6[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 02 : 6[1c0] -> 10[180] [send] via NET/AWS Libfabric/2
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 00 : 3[190] -> 8[160] [send] via NET/AWS Libfabric/0
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 01 : 3[190] -> 0[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 01 : 8[160] -> 12[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 01 : 9[170] -> 2[180] [send] via NET/AWS Libfabric/1
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 01 : 0[160] -> 3[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 01 : 3[190] -> 2[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 01 : 11[190] -> 10[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 01 : 12[1a0] -> 8[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 01 : 8[160] -> 11[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 02 : 4[1a0] -> 0[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 01 : 9[170] -> 2[180] [receive] via NET/AWS Libfabric/1
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 02 : 0[160] -> 1[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 02 : 3[190] -> 7[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 02 : 4[1a0] -> 7[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 02 : 15[1d0] -> 12[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 02 : 11[190] -> 15[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 02 : 6[1c0] -> 10[180] [receive] via NET/AWS Libfabric/2
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 02 : 12[1a0] -> 8[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 02 : 8[160] -> 9[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 02 : 12[1a0] -> 7[1d0] [receive] via NET/AWS Libfabric/2
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 02 : 12[1a0] -> 7[1d0] [send] via NET/AWS Libfabric/2
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 01 : 2[180] -> 1[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 02 : 1[170] -> 5[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 02 : 10[180] -> 11[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 01 : 2[180] -> 9[170] [receive] via NET/AWS Libfabric/1
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 02 : 0[160] -> 3[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 02 : 11[190] -> 10[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 02 : 5[1b0] -> 1[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 01 : 2[180] -> 9[170] [send] via NET/AWS Libfabric/1
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 02 : 9[170] -> 13[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 02 : 14[1c0] -> 2[180] [receive] via NET/AWS Libfabric/2
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 02 : 8[160] -> 11[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 02 : 13[1b0] -> 9[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 02 : 6[1c0] -> 7[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 02 : 6[1c0] -> 5[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 02 : 7[1d0] -> 12[1a0] [receive] via NET/AWS Libfabric/2
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 02 : 12[1a0] -> 15[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 02 : 10[180] -> 9[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 02 : 2[180] -> 3[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 02 : 3[190] -> 2[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 02 : 11[190] -> 8[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 02 : 9[170] -> 10[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 02 : 7[1d0] -> 6[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 03 : 8[160] -> 9[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 03 : 4[1a0] -> 7[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 03 : 6[1c0] -> 5[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 03 : 10[180] -> 14[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 03 : 7[1d0] -> 11[190] [receive] via NET/AWS Libfabric/3
tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 03 : 9[170] -> 10[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 02 : 7[1d0] -> 12[1a0] [send] via NET/AWS Libfabric/2
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 03 : 7[1d0] -> 11[190] [send] via NET/AWS Libfabric/3
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 03 : 11[190] -> 8[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 02 : 14[1c0] -> 15[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 03 : 8[160] -> 12[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 02 : 14[1c0] -> 13[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 02 : 15[1d0] -> 14[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 03 : 13[1b0] -> 12[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 02 : 2[180] -> 1[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 03 : 14[1c0] -> 13[1b0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 02 : 3[190] -> 0[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 03 : 12[1a0] -> 15[1d0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 03 : 10[180] -> 11[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 03 : 15[1d0] -> 3[190] [send] via NET/AWS Libfabric/3
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 02 : 1[170] -> 2[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 03 : 0[160] -> 1[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 03 : 15[1d0] -> 3[190] [receive] via NET/AWS Libfabric/3
tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 03 : 8[160] -> 11[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 03 : 12[1a0] -> 8[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 03 : 13[1b0] -> 6[1c0] [send] via NET/AWS Libfabric/3
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 03 : 2[180] -> 6[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 03 : 5[1b0] -> 4[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 03 : 1[170] -> 2[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 03 : 2[180] -> 3[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 03 : 5[1b0] -> 6[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 03 : 13[1b0] -> 6[1c0] [receive] via NET/AWS Libfabric/3
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 03 : 7[1d0] -> 6[1c0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 03 : 7[1d0] -> 4[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 03 : 3[190] -> 0[160] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 03 : 0[160] -> 4[1a0] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 03 : 11[190] -> 10[180] via P2P/IPC
tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 03 : 10[180] -> 9[170] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 03 : 0[160] -> 3[190] via P2P/IPC
tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 03 : 4[1a0]
-> 0[160] via P2P/IPC tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 04 : 9[170] -> 13[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 04 : 11[190] -> 10[180] via P2P/IPC tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 04 : 10[180] -> 9[170] via P2P/IPC tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 04 : 4[1a0] -> 8[160] [receive] via NET/AWS Libfabric/0 tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 04 : 4[1a0] -> 8[160] [send] via NET/AWS Libfabric/0 tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 04 : 10[180] -> 11[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 04 : 8[160] -> 11[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 04 : 0[160] -> 11[190] [receive] via NET/AWS Libfabric/0 tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 03 : 15[1d0] -> 14[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 03 : 14[1c0] -> 15[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 03 : 15[1d0] -> 12[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 04 : 15[1d0] -> 12[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 04 : 12[1a0] -> 0[160] [send] via NET/AWS Libfabric/0 tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 03 : 3[190] -> 2[180] via P2P/IPC tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 03 : 2[180] -> 1[170] via P2P/IPC tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 03 : 6[1c0] -> 13[1b0] [receive] via NET/AWS Libfabric/3 tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 03 : 13[1b0] -> 14[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 04 : 1[170] -> 5[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 04 : 3[190] -> 2[180] via P2P/IPC tensorflow-benchmarks-efa-worker-0:16:575 [2] 
NCCL INFO Ring 04 : 2[180] -> 1[170] via P2P/IPC tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 04 : 14[1c0] -> 15[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 04 : 12[1a0] -> 0[160] [receive] via NET/AWS Libfabric/0 tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 04 : 15[1d0] -> 14[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 04 : 11[190] -> 8[160] via P2P/IPC tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 03 : 6[1c0] -> 7[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 04 : 0[160] -> 3[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 04 : 12[1a0] -> 15[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 04 : 2[180] -> 3[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 05 : 8[160] -> 12[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 05 : 12[1a0] -> 15[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 04 : 5[1b0] -> 6[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 04 : 3[190] -> 0[160] via P2P/IPC tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 04 : 7[1d0] -> 4[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 03 : 6[1c0] -> 13[1b0] [send] via NET/AWS Libfabric/3 tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 04 : 1[170] -> 2[180] via P2P/IPC tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 04 : 4[1a0] -> 7[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 04 : 0[160] -> 11[190] [send] via NET/AWS Libfabric/0 tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 05 : 3[190] -> 0[160] via P2P/IPC tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 04 : 13[1b0] -> 14[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 05 : 
2[180] -> 3[190] via P2P/IPC tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 04 : 6[1c0] -> 7[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 04 : 9[170] -> 10[180] via P2P/IPC tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 04 : 14[1c0] -> 13[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 04 : 13[1b0] -> 9[170] via P2P/IPC tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 04 : 5[1b0] -> 1[170] via P2P/IPC tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 04 : 7[1d0] -> 6[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 04 : 11[190] -> 0[160] [receive] via NET/AWS Libfabric/0 tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 04 : 6[1c0] -> 5[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 05 : 10[180] -> 11[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 05 : 15[1d0] -> 14[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 05 : 4[1a0] -> 7[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 05 : 14[1c0] -> 13[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 05 : 7[1d0] -> 6[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 05 : 5[1b0] -> 9[170] [receive] via NET/AWS Libfabric/1 tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 05 : 13[1b0] -> 1[170] [receive] via NET/AWS Libfabric/1 tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 05 : 6[1c0] -> 5[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 05 : 12[1a0] -> 8[160] via P2P/IPC tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 04 : 11[190] -> 0[160] [send] via NET/AWS Libfabric/0 tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 05 : 9[170] -> 10[180] via P2P/IPC tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 05 : 
1[170] -> 2[180] via P2P/IPC tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 05 : 15[1d0] -> 12[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 05 : 13[1b0] -> 1[170] [send] via NET/AWS Libfabric/1 tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 05 : 5[1b0] -> 9[170] [send] via NET/AWS Libfabric/1 tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 05 : 14[1c0] -> 15[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 05 : 7[1d0] -> 4[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 05 : 13[1b0] -> 14[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 05 : 11[190] -> 8[160] via P2P/IPC tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 05 : 5[1b0] -> 6[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 05 : 6[1c0] -> 7[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 05 : 0[160] -> 4[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 05 : 2[180] -> 1[170] via P2P/IPC tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 06 : 15[1d0] -> 12[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 05 : 1[170] -> 10[180] [send] via NET/AWS Libfabric/1 tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 06 : 13[1b0] -> 14[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 05 : 8[160] -> 11[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 05 : 11[190] -> 10[180] via P2P/IPC tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 05 : 3[190] -> 2[180] via P2P/IPC tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 06 : 5[1b0] -> 6[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 05 : 4[1a0] -> 0[160] via P2P/IPC tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 05 : 1[170] -> 10[180] 
[receive] via NET/AWS Libfabric/1 tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 06 : 14[1c0] -> 2[180] [send] via NET/AWS Libfabric/2 tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 06 : 12[1a0] -> 8[160] via P2P/IPC tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 05 : 0[160] -> 3[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 06 : 8[160] -> 9[170] via P2P/IPC tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 06 : 11[190] -> 15[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 06 : 6[1c0] -> 10[180] [send] via NET/AWS Libfabric/2 tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 06 : 12[1a0] -> 15[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 06 : 7[1d0] -> 4[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 06 : 3[190] -> 7[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 06 : 14[1c0] -> 2[180] [receive] via NET/AWS Libfabric/2 tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 06 : 4[1a0] -> 0[160] via P2P/IPC tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 06 : 14[1c0] -> 15[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 06 : 4[1a0] -> 15[1d0] [receive] via NET/AWS Libfabric/2 tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 06 : 0[160] -> 1[170] via P2P/IPC tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 06 : 2[180] -> 3[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 06 : 14[1c0] -> 13[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 06 : 3[190] -> 2[180] via P2P/IPC tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 06 : 2[180] -> 1[170] via P2P/IPC tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 06 : 4[1a0] -> 15[1d0] [send] via NET/AWS Libfabric/2 tensorflow-benchmarks-efa-worker-1:16:581 [2] 
NCCL INFO Ring 05 : 10[180] -> 9[170] via P2P/IPC tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 06 : 9[170] -> 13[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 06 : 8[160] -> 11[190] via P2P/IPC tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 05 : 10[180] -> 1[170] [receive] via NET/AWS Libfabric/1 tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 06 : 13[1b0] -> 9[170] via P2P/IPC tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 05 : 10[180] -> 1[170] [send] via NET/AWS Libfabric/1 tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 06 : 1[170] -> 5[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 06 : 0[160] -> 3[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 06 : 6[1c0] -> 10[180] [receive] via NET/AWS Libfabric/2 tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 06 : 10[180] -> 11[190] via P2P/IPC tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 06 : 5[1b0] -> 1[170] via P2P/IPC tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 06 : 3[190] -> 0[160] via P2P/IPC tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 06 : 6[1c0] -> 7[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 06 : 11[190] -> 10[180] via P2P/IPC tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 06 : 10[180] -> 9[170] via P2P/IPC tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 06 : 1[170] -> 2[180] via P2P/IPC tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 06 : 9[170] -> 10[180] via P2P/IPC tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 06 : 11[190] -> 8[160] via P2P/IPC tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 07 : 0[160] -> 1[170] via P2P/IPC tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 06 : 6[1c0] -> 5[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 06 : 
7[1d0] -> 6[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 07 : 15[1d0] -> 3[190] [receive] via NET/AWS Libfabric/3 tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 07 : 13[1b0] -> 12[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 07 : 10[180] -> 14[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 07 : 8[160] -> 9[170] via P2P/IPC tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 07 : 3[190] -> 0[160] via P2P/IPC tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 07 : 2[180] -> 6[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO Ring 07 : 9[170] -> 10[180] via P2P/IPC tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 07 : 7[1d0] -> 11[190] [receive] via NET/AWS Libfabric/3 tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 07 : 11[190] -> 8[160] via P2P/IPC tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO Ring 07 : 1[170] -> 2[180] via P2P/IPC tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 07 : 5[1b0] -> 4[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 07 : 8[160] -> 12[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 07 : 6[1c0] -> 5[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 07 : 0[160] -> 4[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 06 : 15[1d0] -> 4[1a0] [receive] via NET/AWS Libfabric/2 tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 07 : 2[180] -> 3[190] via P2P/IPC tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 06 : 4[1a0] -> 7[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 06 : 15[1d0] -> 14[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 07 : 12[1a0] -> 15[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 07 : 7[1d0] -> 11[190] 
[send] via NET/AWS Libfabric/3 tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 07 : 14[1c0] -> 13[1b0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 06 : 15[1d0] -> 4[1a0] [send] via NET/AWS Libfabric/2 tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 07 : 10[180] -> 11[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO Ring 07 : 13[1b0] -> 14[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 07 : 4[1a0] -> 7[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO Ring 07 : 10[180] -> 9[170] via P2P/IPC tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO Ring 07 : 11[190] -> 10[180] via P2P/IPC tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 07 : 7[1d0] -> 6[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:15:574 [1] NCCL INFO comm 0x14fb0c3d9f90 rank 9 nranks 16 cudaDev 1 busId 170 - Init COMPLETE tensorflow-benchmarks-efa-worker-1:16:581 [2] NCCL INFO comm 0x14b8043db700 rank 10 nranks 16 cudaDev 2 busId 180 - Init COMPLETE tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 07 : 5[1b0] -> 14[1c0] [receive] via NET/AWS Libfabric/3 tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 07 : 5[1b0] -> 14[1c0] [send] via NET/AWS Libfabric/3 tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO Ring 07 : 12[1a0] -> 8[160] via P2P/IPC tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO Ring 07 : 8[160] -> 11[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 07 : 15[1d0] -> 3[190] [send] via NET/AWS Libfabric/3 tensorflow-benchmarks-efa-worker-1:17:592 [3] NCCL INFO comm 0x15197c3cb0c0 rank 11 nranks 16 cudaDev 3 busId 190 - Init COMPLETE tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO Ring 07 : 7[1d0] -> 4[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO Ring 07 : 6[1c0] -> 7[1d0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO Ring 
07 : 4[1a0] -> 0[160] via P2P/IPC tensorflow-benchmarks-efa-worker-1:14:575 [0] NCCL INFO comm 0x14a3e0ec0630 rank 8 nranks 16 cudaDev 0 busId 160 - Init COMPLETE tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Ring 07 : 0[160] -> 3[190] via P2P/IPC tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 07 : 15[1d0] -> 14[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO Ring 07 : 15[1d0] -> 12[1a0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:21:574 [7] NCCL INFO comm 0x14adf43dec40 rank 7 nranks 16 cudaDev 7 busId 1d0 - Init COMPLETE tensorflow-benchmarks-efa-worker-0:18:576 [4] NCCL INFO comm 0x149da03921e0 rank 4 nranks 16 cudaDev 4 busId 1a0 - Init COMPLETE tensorflow-benchmarks-efa-worker-1:18:580 [4] NCCL INFO comm 0x1502c03d3f90 rank 12 nranks 16 cudaDev 4 busId 1a0 - Init COMPLETE tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO Ring 07 : 3[190] -> 2[180] via P2P/IPC tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO Ring 07 : 2[180] -> 1[170] via P2P/IPC tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO comm 0x152dd055ae30 rank 0 nranks 16 cudaDev 0 busId 160 - Init COMPLETE tensorflow-benchmarks-efa-worker-0:17:584 [3] NCCL INFO comm 0x1535d83981e0 rank 3 nranks 16 cudaDev 3 busId 190 - Init COMPLETE tensorflow-benchmarks-efa-worker-0:15:587 [1] NCCL INFO comm 0x14a4a839afb0 rank 1 nranks 16 cudaDev 1 busId 170 - Init COMPLETE tensorflow-benchmarks-efa-worker-0:16:575 [2] NCCL INFO comm 0x146a90396360 rank 2 nranks 16 cudaDev 2 busId 180 - Init COMPLETE tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 07 : 14[1c0] -> 5[1b0] [receive] via NET/AWS Libfabric/3 tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO Ring 07 : 5[1b0] -> 6[1c0] via P2P/IPC tensorflow-benchmarks-efa-worker-0:20:585 [6] NCCL INFO comm 0x1464ac39ba20 rank 6 nranks 16 cudaDev 6 busId 1c0 - Init COMPLETE tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 07 : 14[1c0] -> 15[1d0] via P2P/IPC 
tensorflow-benchmarks-efa-worker-1:19:587 [5] NCCL INFO comm 0x14ce643d28a0 rank 13 nranks 16 cudaDev 5 busId 1b0 - Init COMPLETE tensorflow-benchmarks-efa-worker-1:21:593 [7] NCCL INFO comm 0x14845c3c9300 rank 15 nranks 16 cudaDev 7 busId 1d0 - Init COMPLETE tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO Ring 07 : 14[1c0] -> 5[1b0] [send] via NET/AWS Libfabric/3 tensorflow-benchmarks-efa-worker-1:20:586 [6] NCCL INFO comm 0x152840393a70 rank 14 nranks 16 cudaDev 6 busId 1c0 - Init COMPLETE tensorflow-benchmarks-efa-worker-0:19:581 [5] NCCL INFO comm 0x14567c3965e0 rank 5 nranks 16 cudaDev 5 busId 1b0 - Init COMPLETE tensorflow-benchmarks-efa-worker-0:14:595 [0] NCCL INFO Launch mode Parallel Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss Done warm up Step Img/sec total_loss 1 images/sec: 194.6 +/- 0.0 (jitter = 0.0) 8.364 1 images/sec: 194.5 +/- 0.0 (jitter = 0.0) 8.427 1 images/sec: 194.4 +/- 0.0 (jitter = 0.0) 8.287 1 images/sec: 194.2 +/- 0.0 (jitter = 0.0) 8.287 1 images/sec: 194.0 +/- 0.0 (jitter = 0.0) 8.306 1 images/sec: 193.9 +/- 0.0 (jitter = 0.0) 8.121 1 images/sec: 194.0 +/- 0.0 (jitter = 0.0) 8.353 1 images/sec: 194.4 +/- 0.0 (jitter = 0.0) 8.365 1 images/sec: 194.3 +/- 0.0 (jitter = 0.0) 8.173 1 images/sec: 194.7 +/- 0.0 (jitter = 0.0) 8.064 1 images/sec: 194.0 +/- 0.0 (jitter = 0.0) 8.205 1 images/sec: 193.7 +/- 0.0 (jitter = 0.0) 8.354 1 images/sec: 194.6 +/- 0.0 (jitter = 0.0) 8.201 1 images/sec: 194.2 +/- 0.0 (jitter = 0.0) 8.195 1 
images/sec: 194.2 +/- 0.0 (jitter = 0.0) 8.285 1 images/sec: 193.8 +/- 0.0 (jitter = 0.0) 8.241 10 images/sec: 194.8 +/- 0.2 (jitter = 1.0) 8.161 10 images/sec: 194.9 +/- 0.2 (jitter = 1.0) 8.133 10 images/sec: 194.8 +/- 0.2 (jitter = 0.7) 8.196 10 images/sec: 194.9 +/- 0.2 (jitter = 1.0) 8.193 10 images/sec: 194.9 +/- 0.2 (jitter = 0.9) 8.171 10 images/sec: 194.9 +/- 0.2 (jitter = 0.8) 8.179 10 images/sec: 194.9 +/- 0.3 (jitter = 0.9) 8.110 10 images/sec: 194.9 +/- 0.3 (jitter = 0.6) 8.222 10 images/sec: 194.9 +/- 0.2 (jitter = 0.7) 8.201 10 images/sec: 194.9 +/- 0.2 (jitter = 1.0) 8.179 10 images/sec: 194.8 +/- 0.3 (jitter = 1.1) 8.124 10 images/sec: 194.9 +/- 0.2 (jitter = 0.9) 8.029 10 images/sec: 194.9 +/- 0.2 (jitter = 0.9) 8.137 10 images/sec: 194.9 +/- 0.2 (jitter = 0.9) 8.190 10 images/sec: 194.9 +/- 0.2 (jitter = 0.8) 8.170 10 images/sec: 194.9 +/- 0.2 (jitter = 0.8) 8.157 20 images/sec: 195.6 +/- 0.2 (jitter = 1.5) 7.985 20 images/sec: 195.5 +/- 0.2 (jitter = 1.6) 8.135 20 images/sec: 195.5 +/- 0.2 (jitter = 1.4) 8.083 20 images/sec: 195.5 +/- 0.2 (jitter = 1.4) 8.090 20 images/sec: 195.5 +/- 0.2 (jitter = 1.5) 8.060 20 images/sec: 195.5 +/- 0.2 (jitter = 1.4) 8.041 20 images/sec: 195.5 +/- 0.3 (jitter = 1.6) 8.076 20 images/sec: 195.5 +/- 0.2 (jitter = 1.4) 8.072 20 images/sec: 195.5 +/- 0.2 (jitter = 1.4) 8.084 20 images/sec: 195.5 +/- 0.2 (jitter = 1.4) 8.016 20 images/sec: 195.5 +/- 0.3 (jitter = 1.7) 7.980 20 images/sec: 195.6 +/- 0.2 (jitter = 1.4) 8.053 20 images/sec: 195.5 +/- 0.2 (jitter = 1.4) 7.999 20 images/sec: 195.6 +/- 0.2 (jitter = 1.3) 8.121 20 images/sec: 195.5 +/- 0.2 (jitter = 1.3) 8.024 20 images/sec: 195.5 +/- 0.2 (jitter = 1.6) 8.096 30 images/sec: 196.3 +/- 0.4 (jitter = 2.1) 8.080 30 images/sec: 196.3 +/- 0.4 (jitter = 1.9) 7.995 30 images/sec: 196.3 +/- 0.4 (jitter = 2.2) 8.036 30 images/sec: 196.4 +/- 0.4 (jitter = 1.9) 8.026 30 images/sec: 196.3 +/- 0.4 (jitter = 1.7) 7.999 30 images/sec: 196.3 +/- 0.4 (jitter = 2.0) 8.012 30 
images/sec: 196.4 +/- 0.4 (jitter = 1.9) 8.037 30 images/sec: 196.3 +/- 0.4 (jitter = 1.9) 8.075 30 images/sec: 196.3 +/- 0.4 (jitter = 2.0) 8.008 30 images/sec: 196.3 +/- 0.4 (jitter = 1.7) 8.098 30 images/sec: 196.3 +/- 0.4 (jitter = 1.8) 8.025 30 images/sec: 196.3 +/- 0.4 (jitter = 2.0) 8.030 30 images/sec: 196.3 +/- 0.4 (jitter = 2.5) 7.984 30 images/sec: 196.3 +/- 0.4 (jitter = 1.9) 7.982 30 images/sec: 196.3 +/- 0.4 (jitter = 1.9) 8.037 30 images/sec: 196.3 +/- 0.4 (jitter = 1.9) 8.031 40 images/sec: 197.1 +/- 0.4 (jitter = 2.7) 8.037 40 images/sec: 197.1 +/- 0.4 (jitter = 2.7) 7.975 40 images/sec: 197.1 +/- 0.4 (jitter = 2.8) 8.007 40 images/sec: 197.1 +/- 0.4 (jitter = 2.7) 7.947 40 images/sec: 197.1 +/- 0.4 (jitter = 2.7) 8.031 40 images/sec: 197.1 +/- 0.4 (jitter = 2.8) 8.006 40 images/sec: 197.1 +/- 0.4 (jitter = 2.7) 8.043 40 images/sec: 197.1 +/- 0.4 (jitter = 2.8) 8.010 40 images/sec: 197.1 +/- 0.4 (jitter = 2.7) 8.053 40 images/sec: 197.1 +/- 0.4 (jitter = 2.6) 8.056 40 images/sec: 197.1 +/- 0.4 (jitter = 3.1) 8.045 40 images/sec: 197.1 +/- 0.4 (jitter = 2.6) 8.008 40 images/sec: 197.1 +/- 0.4 (jitter = 2.8) 7.994 40 images/sec: 197.1 +/- 0.4 (jitter = 2.8) 8.031 40 images/sec: 197.1 +/- 0.4 (jitter = 2.8) 8.030 40 images/sec: 197.1 +/- 0.4 (jitter = 2.8) 8.005 50 images/sec: 197.5 +/- 0.3 (jitter = 2.9) 7.980 50 images/sec: 197.5 +/- 0.3 (jitter = 2.8) 8.041 50 images/sec: 197.5 +/- 0.3 (jitter = 3.0) 8.087 50 images/sec: 197.5 +/- 0.3 (jitter = 2.9) 8.097 50 images/sec: 197.5 +/- 0.3 (jitter = 3.0) 8.019 50 images/sec: 197.5 +/- 0.3 (jitter = 2.9) 8.001 50 images/sec: 197.5 +/- 0.3 (jitter = 2.7) 8.017 50 images/sec: 197.5 +/- 0.3 (jitter = 2.9) 7.998 50 images/sec: 197.5 +/- 0.3 (jitter = 3.1) 7.986 50 images/sec: 197.5 +/- 0.3 (jitter = 3.0) 8.050 50 images/sec: 197.5 +/- 0.3 (jitter = 2.8) 8.041 50 images/sec: 197.5 +/- 0.3 (jitter = 2.9) 7.931 50 images/sec: 197.5 +/- 0.3 (jitter = 2.8) 8.051 50 images/sec: 197.5 +/- 0.3 (jitter = 2.8) 8.043 50 
images/sec: 197.5 +/- 0.3 (jitter = 2.8) 8.033 50 images/sec: 197.5 +/- 0.3 (jitter = 3.0) 8.045 60 images/sec: 197.5 +/- 0.3 (jitter = 2.3) 8.025 60 images/sec: 197.5 +/- 0.3 (jitter = 2.4) 8.001 60 images/sec: 197.5 +/- 0.3 (jitter = 2.5) 8.038 60 images/sec: 197.5 +/- 0.3 (jitter = 2.4) 7.976 60 images/sec: 197.5 +/- 0.3 (jitter = 2.5) 8.071 60 images/sec: 197.5 +/- 0.3 (jitter = 2.4) 8.023 60 images/sec: 197.5 +/- 0.3 (jitter = 2.5) 8.067 60 images/sec: 197.5 +/- 0.3 (jitter = 2.4) 7.938 60 images/sec: 197.5 +/- 0.3 (jitter = 2.5) 7.996 60 images/sec: 197.5 +/- 0.3 (jitter = 2.5) 8.014 60 images/sec: 197.5 +/- 0.3 (jitter = 2.4) 8.035 60 images/sec: 197.5 +/- 0.3 (jitter = 2.3) 8.115 60 images/sec: 197.5 +/- 0.3 (jitter = 2.4) 8.048 60 images/sec: 197.5 +/- 0.3 (jitter = 2.5) 7.966 60 images/sec: 197.5 +/- 0.3 (jitter = 2.4) 8.033 60 images/sec: 197.5 +/- 0.3 (jitter = 2.7) 8.060 70 images/sec: 197.5 +/- 0.3 (jitter = 2.2) 8.048 70 images/sec: 197.5 +/- 0.3 (jitter = 2.2) 8.042 70 images/sec: 197.5 +/- 0.3 (jitter = 2.1) 7.944 70 images/sec: 197.5 +/- 0.3 (jitter = 2.3) 8.089 70 images/sec: 197.5 +/- 0.3 (jitter = 2.1) 8.067 70 images/sec: 197.5 +/- 0.3 (jitter = 2.0) 8.132 70 images/sec: 197.5 +/- 0.3 (jitter = 2.1) 8.069 70 images/sec: 197.5 +/- 0.3 (jitter = 2.1) 8.015 70 images/sec: 197.5 +/- 0.3 (jitter = 2.2) 7.969 70 images/sec: 197.5 +/- 0.3 (jitter = 2.4) 8.067 70 images/sec: 197.5 +/- 0.3 (jitter = 2.1) 8.057 70 images/sec: 197.5 +/- 0.3 (jitter = 2.1) 8.034 70 images/sec: 197.5 +/- 0.3 (jitter = 2.1) 8.075 70 images/sec: 197.5 +/- 0.3 (jitter = 2.0) 8.027 70 images/sec: 197.5 +/- 0.3 (jitter = 2.0) 7.961 70 images/sec: 197.5 +/- 0.3 (jitter = 2.2) 8.041 80 images/sec: 197.4 +/- 0.2 (jitter = 2.1) 7.960 80 images/sec: 197.4 +/- 0.2 (jitter = 2.0) 8.137 80 images/sec: 197.4 +/- 0.2 (jitter = 2.0) 7.980 80 images/sec: 197.4 +/- 0.2 (jitter = 1.9) 8.028 80 images/sec: 197.4 +/- 0.2 (jitter = 1.9) 8.024 80 images/sec: 197.4 +/- 0.2 (jitter = 1.9) 7.973 80 
images/sec: 197.4 +/- 0.2 (jitter = 2.1) 7.996 80 images/sec: 197.4 +/- 0.2 (jitter = 1.9) 8.032 80 images/sec: 197.4 +/- 0.2 (jitter = 2.0) 8.159 80 images/sec: 197.4 +/- 0.2 (jitter = 1.9) 8.001 80 images/sec: 197.4 +/- 0.2 (jitter = 1.9) 8.105 80 images/sec: 197.4 +/- 0.2 (jitter = 2.1) 8.052 80 images/sec: 197.4 +/- 0.2 (jitter = 1.9) 8.066 80 images/sec: 197.4 +/- 0.2 (jitter = 2.2) 8.030 80 images/sec: 197.4 +/- 0.2 (jitter = 2.0) 8.073 80 images/sec: 197.4 +/- 0.2 (jitter = 2.0) 8.073 90 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 8.098 90 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 8.043 90 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 8.012 90 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 8.075 90 images/sec: 197.5 +/- 0.2 (jitter = 2.1) 8.069 90 images/sec: 197.5 +/- 0.2 (jitter = 1.8) 8.132 90 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 7.995 90 images/sec: 197.5 +/- 0.2 (jitter = 1.8) 8.118 90 images/sec: 197.5 +/- 0.2 (jitter = 2.0) 8.013 90 images/sec: 197.5 +/- 0.2 (jitter = 2.0) 8.092 90 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 7.987 90 images/sec: 197.5 +/- 0.2 (jitter = 2.0) 7.998 90 images/sec: 197.5 +/- 0.2 (jitter = 1.8) 8.114 90 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 7.995 90 images/sec: 197.5 +/- 0.2 (jitter = 2.0) 8.099 90 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 8.072 100 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 8.108 100 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 8.035 ---------------------------------------------------------------- total images/sec: 3158.68 ---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 1.8) 8.085 ---------------------------------------------------------------- total images/sec: 3158.62 ---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 1.8) 8.000 ---------------------------------------------------------------- total images/sec: 3158.64 ---------------------------------------------------------------- 
---------------------------------------------------------------- total images/sec: 3158.67 ---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 8.037 ---------------------------------------------------------------- total images/sec: 3158.61 ---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 8.074 ---------------------------------------------------------------- total images/sec: 3158.62 ---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 7.977 ---------------------------------------------------------------- total images/sec: 3158.60 ---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 7.953 100 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 7.980 ---------------------------------------------------------------- total images/sec: 3158.67 ---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 2.0) 8.044 ---------------------------------------------------------------- total images/sec: 3158.67 ---------------------------------------------------------------- ---------------------------------------------------------------- total images/sec: 3158.67 ---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 7.952 100 images/sec: 197.5 +/- 0.2 (jitter = 2.0) 8.051 ---------------------------------------------------------------- total images/sec: 3158.72 ---------------------------------------------------------------- ---------------------------------------------------------------- total images/sec: 3158.57 ---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 8.034 ---------------------------------------------------------------- total images/sec: 3158.66 
---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 1.9) 8.040 ---------------------------------------------------------------- total images/sec: 3158.74 ---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 1.8) 8.014 ---------------------------------------------------------------- total images/sec: 3158.65 ---------------------------------------------------------------- 100 images/sec: 197.5 +/- 0.2 (jitter = 2.0) 8.111 ---------------------------------------------------------------- total images/sec: 3158.63 ----------------------------------------------------------------
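Since every rank prints a nearly identical "total images/sec" summary, one match is enough to recover the aggregate throughput from a captured log. A minimal sketch, assuming you have saved the launcher pod's output to a file (the name `benchmark.log` and the sample contents are illustrative, not from the run above):

```shell
#!/bin/sh
# Illustrative stand-in for a captured log, e.g. from:
#   kubectl logs <launcher-pod> > benchmark.log
printf 'total images/sec: 3158.68\nother log noise\ntotal images/sec: 3158.62\n' > benchmark.log

# All ranks report the same figure to within a fraction of an image/sec,
# so stop at the first match (-m1) and print only the matching text (-o).
grep -m1 -o 'total images/sec: [0-9.]*' benchmark.log
```

This avoids hand-scrolling through the interleaved per-rank output when comparing runs.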