==================== Test output for //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_ft_test (shard 17 of 17):
2024-03-29 06:59:10.291780: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Running tests under Python 3.12.0: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/python_aarch64-unknown-linux-gnu/bin/python3
[ RUN ] SnapshotFtTest.testLargeMultiSourceSnapshotRecoversAndCompletes_test_mode_graph_tfapiversion_1_numsources_1_numworkers_1
[ SKIPPED ] SnapshotFtTest.testLargeMultiSourceSnapshotRecoversAndCompletes_test_mode_graph_tfapiversion_1_numsources_1_numworkers_1
[ RUN ] SnapshotFtTest.testNestedDataset_test_mode_eager_tfapiversion_1_numworkers_3
[ SKIPPED ] SnapshotFtTest.testNestedDataset_test_mode_eager_tfapiversion_1_numworkers_3
[ RUN ] SnapshotFtTest.testNonrepeatedDatasetDoesntProduceSecondRepetitionDir_test_mode_graph_tfapiversion_1_numsources_3_numworkers_1
[ SKIPPED ] SnapshotFtTest.testNonrepeatedDatasetDoesntProduceSecondRepetitionDir_test_mode_graph_tfapiversion_1_numsources_3_numworkers_1
[ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_1_numelements_2_numrepetitions_1_numworkers_3
[ SKIPPED ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_1_numelements_2_numrepetitions_1_numworkers_3
[ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numelements_1_numrepetitions_10_numworkers_1
[ SKIPPED ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numelements_1_numrepetitions_10_numworkers_1
[ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_2_numelements_2_numrepetitions_10_numworkers_3
2024-03-29 07:00:11.227828: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpqhne5tsw/tf_data_dispatcher_journal
2024-03-29 07:00:11.227926: I tensorflow/core/data/service/dispatcher_impl.cc:242] No journal found. Starting dispatcher from new state.
2024-03-29 07:00:11.228830: I tensorflow/core/data/service/dispatcher_impl.cc:271] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpqhne5tsw" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3
2024-03-29 07:00:11.228862: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:41165
2024-03-29 07:00:11.235228: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive
2024-03-29 07:00:11.475834: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41165. Worker config: protocol: "grpc" dispatcher_address: "localhost:41165" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384
2024-03-29 07:00:11.476145: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:46509
2024-03-29 07:00:11.478622: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41165. Worker config: protocol: "grpc" dispatcher_address: "localhost:41165" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384
2024-03-29 07:00:11.478845: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:35539
2024-03-29 07:00:11.480932: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41165. Worker config: protocol: "grpc" dispatcher_address: "localhost:41165" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384
2024-03-29 07:00:11.481131: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:41703
WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.12/contextlib.py:105: TensorFlowTestCase.test_session (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `self.session()` or `self.cached_session()` instead.
W0329 07:00:11.858525 281472998143024 deprecation.py:50] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.12/contextlib.py:105: TensorFlowTestCase.test_session (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `self.session()` or `self.cached_session()` instead.
2024-03-29 07:00:12.113014: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled
2024-03-29 07:00:12.255051: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1711695612.337195 1883996 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot
I0000 00:00:1711695612.537203 1883996 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot
I0000 00:00:1711695612.538131 1885101 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, created stream_0 and assigned to localhost:35539
I0000 00:00:1711695612.539491 1889549 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, created stream_1 and assigned to localhost:41703
2024-03-29 07:00:12.539779: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 46509
2024-03-29 07:00:12.606868: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream: 1, compression: SNAPPY }
2024-03-29 07:00:12.615620: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream 1, chunk 0.
2024-03-29 07:00:12.775266: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream: 0, compression: SNAPPY }
2024-03-29 07:00:12.775751: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream 0, chunk 0.
I0000 00:00:1711695612.780726 1889611 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, created stream_2 and assigned to localhost:46509
2024-03-29 07:00:12.815507: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 41165
2024-03-29 07:00:12.815792: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed
2024-03-29 07:00:12.816121: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed
2024-03-29 07:00:12.816354: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpqhne5tsw/tf_data_dispatcher_journal
2024-03-29 07:00:12.816592: I tensorflow/core/data/service/dispatcher_impl.cc:252] Restored from journal in 95us.
2024-03-29 07:00:12.937219: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream: 2, compression: SNAPPY }
2024-03-29 07:00:12.937685: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream 2, chunk 0.
I0000 00:00:1711695613.017469 1890758 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot
2024-03-29 07:00:13.017725: I tensorflow/core/data/service/dispatcher_impl.cc:271] Started tf.data service dispatcher with config port: 41165 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpqhne5tsw" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3
2024-03-29 07:00:13.017808: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:41165
2024-03-29 07:00:13.025385: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695613.027309 1890269 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8c149abc70a75883_ldcg-aarch64-02-ffecb5b9-1735997-614c730f666ac.tfrecord*.
I0000 00:00:1711695613.029065 1890272 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1.
2024-03-29 07:00:13.030614: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled.
I0000 00:00:1711695613.157326 1890754 snapshot_manager.cc:775] Starting repetition_1 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, source 0
I0000 00:00:1711695613.186053 1891240 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1.
I0000 00:00:1711695613.195222 1890693 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1.
I0000 00:00:1711695613.196107 1890272 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2.
I0000 00:00:1711695613.208594 1890693 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2.
I0000 00:00:1711695613.215331 1891685 snapshot_manager.cc:775] Starting repetition_2 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, source 0
I0000 00:00:1711695613.646042 1890272 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3.
I0000 00:00:1711695613.655390 1891240 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2.
I0000 00:00:1711695613.668325 1892966 snapshot_manager.cc:775] Starting repetition_3 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, source 0
I0000 00:00:1711695613.671781 1890693 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3.
I0000 00:00:1711695613.673114 1891240 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3.
I0000 00:00:1711695613.674378 1890693 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4.
I0000 00:00:1711695613.674568 1890272 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4.
I0000 00:00:1711695613.674919 1891240 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4.
I0000 00:00:1711695613.777304 1894065 snapshot_manager.cc:775] Starting repetition_4 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, source 0
I0000 00:00:1711695613.779516 1891240 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5.
I0000 00:00:1711695613.780114 1890272 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5.
I0000 00:00:1711695613.780304 1890693 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5.
I0000 00:00:1711695613.808412 1894065 snapshot_manager.cc:775] Starting repetition_5 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, source 0
I0000 00:00:1711695613.810920 1890693 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6.
I0000 00:00:1711695613.814561 1890272 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6.
I0000 00:00:1711695613.887133 1894697 snapshot_manager.cc:775] Starting repetition_6 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, source 0
I0000 00:00:1711695613.908500 1890693 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7.
I0000 00:00:1711695613.915596 1890272 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7.
2024-03-29 07:00:13.929711: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41165. Worker config: port: 46509 protocol: "grpc" dispatcher_address: "localhost:41165" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384
2024-03-29 07:00:13.929917: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:46509
2024-03-29 07:00:13.945454: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream: 2, compression: SNAPPY }
2024-03-29 07:00:13.945898: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream 2, chunk 0.
I0000 00:00:1711695613.987022 1894823 snapshot_manager.cc:775] Starting repetition_7 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, source 0 2024-03-29 07:00:14.035103: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695614.116188 1890270 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c2f56a81756b2a52_ldcg-aarch64-02-c1b28a2-1735997-614c730f667da.tfrecord*. I0000 00:00:1711695614.116602 1890272 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8. 2024-03-29 07:00:14.132418: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 
I0000 00:00:1711695614.136974 1894823 snapshot_manager.cc:775] Starting repetition_8 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, source 0 I0000 00:00:1711695614.145173 1895258 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1. I0000 00:00:1711695614.145335 1895258 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2. I0000 00:00:1711695614.145374 1895258 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3. 
I0000 00:00:1711695614.145405 1895258 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4. I0000 00:00:1711695614.145433 1895258 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5. I0000 00:00:1711695614.147813 1895258 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6. I0000 00:00:1711695614.148129 1895258 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7. 
I0000 00:00:1711695614.148400 1895258 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8. I0000 00:00:1711695614.148668 1895258 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9. 2024-03-29 07:00:14.175166: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-03-29 07:00:14.175726: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream 0, chunk 0. 
I0000 00:00:1711695614.177301 1895135 snapshot_manager.cc:775] Starting repetition_9 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, source 0 I0000 00:00:1711695614.179367 1890272 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9. I0000 00:00:1711695614.210487 1890693 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8. I0000 00:00:1711695614.216298 1895258 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10. 
I0000 00:00:1711695614.217426 1896191 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1.
I0000 00:00:1711695614.217508 1896191 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2.
I0000 00:00:1711695614.217542 1896191 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3.
I0000 00:00:1711695614.217790 1896191 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4.
I0000 00:00:1711695614.217911 1896191 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5.
I0000 00:00:1711695614.217952 1896191 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6.
I0000 00:00:1711695614.218144 1896191 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7.
I0000 00:00:1711695614.218569 1896191 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8.
I0000 00:00:1711695614.218854 1896191 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9.
I0000 00:00:1711695614.219144 1896191 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10.
2024-03-29 07:00:14.219741: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 4, chunk size: 56B.
2024-03-29 07:00:14.220429: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_2_4. Checkpointing distributed tf.data snapshot writer took 643us
I0000 00:00:1711695614.220479 1890693 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9.
2024-03-29 07:00:14.220831: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot/streams/stream_0/checkpoints
2024-03-29 07:00:14.221143: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream: 0, compression: SNAPPY }
I0000 00:00:1711695614.222049 1890693 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10.
I0000 00:00:1711695614.235442 1890272 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10.
2024-03-29 07:00:14.256628: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream: 2, compression: SNAPPY }. Stream 2, chunk 0, number of elements in chunk: 4, chunk size: 56B.
2024-03-29 07:00:14.257194: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot/streams/stream_2/checkpoints/checkpoint_2_4. Checkpointing distributed tf.data snapshot writer took 494us
2024-03-29 07:00:14.257585: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot/streams/stream_2/checkpoints
2024-03-29 07:00:14.257874: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream: 2, compression: SNAPPY }
2024-03-29 07:00:14.258995: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream: 1, compression: SNAPPY }. Stream 1, chunk 0, number of elements in chunk: 12, chunk size: 168B.
2024-03-29 07:00:14.259414: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot/streams/stream_1/checkpoints/checkpoint_2_12. Checkpointing distributed tf.data snapshot writer took 388us
2024-03-29 07:00:14.259753: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot/streams/stream_1/checkpoints
2024-03-29 07:00:14.260033: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot, stream: 1, compression: SNAPPY }
2024-03-29 07:00:14.336715: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 35539
2024-03-29 07:00:14.377337: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 41165
2024-03-29 07:00:14.378109: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpqhne5tsw/tf_data_dispatcher_journal
2024-03-29 07:00:14.378366: I tensorflow/core/data/service/dispatcher_impl.cc:252] Restored from journal in 145us.
2024-03-29 07:00:14.385598: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed
2024-03-29 07:00:14.395743: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed
I0000 00:00:1711695614.684037 1897042 snapshot_manager.cc:372] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot
I0000 00:00:1711695614.684134 1897042 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot
2024-03-29 07:00:14.695112: I tensorflow/core/data/service/dispatcher_impl.cc:271] Started tf.data service dispatcher with config port: 41165 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpqhne5tsw" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3
2024-03-29 07:00:14.695255: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:41165
2024-03-29 07:00:14.695394: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:00:14.699974: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41165. Worker config: port: 35539 protocol: "grpc" dispatcher_address: "localhost:41165" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384
2024-03-29 07:00:14.700164: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:35539
2024-03-29 07:00:14.700485: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 41703
2024-03-29 07:00:14.796475: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 41165
2024-03-29 07:00:14.797267: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpqhne5tsw/tf_data_dispatcher_journal
2024-03-29 07:00:14.797582: I tensorflow/core/data/service/dispatcher_impl.cc:252] Restored from journal in 194us.
I0000 00:00:1711695614.805107 1898340 snapshot_manager.cc:258] Recovered finished tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpx07_n422/tmpof946lud/tf_data_snapshot
2024-03-29 07:00:14.805680: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed
2024-03-29 07:00:14.815139: I tensorflow/core/data/service/dispatcher_impl.cc:271] Started tf.data service dispatcher with config port: 41165 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpqhne5tsw" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3
2024-03-29 07:00:14.815277: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:41165
2024-03-29 07:00:14.825628: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed
2024-03-29 07:00:14.826201: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41165. Worker config: port: 41703 protocol: "grpc" dispatcher_address: "localhost:41165" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384
2024-03-29 07:00:14.826346: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:41703
2024-03-29 07:00:14.835058: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:00:15.845048: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:00:15.918544: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence [[{{node IteratorGetNext}}]]
2024-03-29 07:00:15.920787: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence [[{{node IteratorGetNext}}]]
2024-03-29 07:00:15.922239: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 41703
2024-03-29 07:00:15.923130: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 35539
2024-03-29 07:00:15.923880: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 46509
2024-03-29 07:00:15.924812: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 41165
[       OK ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_2_numelements_2_numrepetitions_10_numworkers_3
[ RUN      ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSourceName_test_mode_graph_tfapiversion_2_badsourcedirname_source1
2024-03-29 07:00:19.276625: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpv7kl1vn7/tf_data_dispatcher_journal
2024-03-29 07:00:19.276737: I tensorflow/core/data/service/dispatcher_impl.cc:242] No journal found. Starting dispatcher from new state.
2024-03-29 07:00:19.277032: I tensorflow/core/data/service/dispatcher_impl.cc:271] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpv7kl1vn7" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3
2024-03-29 07:00:19.277059: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:42621
2024-03-29 07:00:19.289052: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive
I0000 00:00:1711695619.366263 1911175 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpkdkvvh9b/tmpw418vpwa/tf_data_snapshot
I0000 00:00:1711695619.507623 1911175 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpkdkvvh9b/tmpw418vpwa/tf_data_snapshot
2024-03-29 07:00:19.685885: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 42621
2024-03-29 07:00:19.686658: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpv7kl1vn7/tf_data_dispatcher_journal
2024-03-29 07:00:19.686838: I tensorflow/core/data/service/dispatcher_impl.cc:252] Restored from journal in 45us.
2024-03-29 07:00:19.886632: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive
2024-03-29 07:00:19.887830: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 42621
[       OK ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSourceName_test_mode_graph_tfapiversion_2_badsourcedirname_source1
[ RUN      ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSplitNames_test_mode_graph_tfapiversion_2_badsplitfilename_split
2024-03-29 07:00:19.893059: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpjhaj5dwx/tf_data_dispatcher_journal
2024-03-29 07:00:19.893141: I tensorflow/core/data/service/dispatcher_impl.cc:242] No journal found. Starting dispatcher from new state.
2024-03-29 07:00:19.893422: I tensorflow/core/data/service/dispatcher_impl.cc:271] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpjhaj5dwx" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3
2024-03-29 07:00:19.893446: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:36129
2024-03-29 07:00:19.905119: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive
I0000 00:00:1711695619.956183 1913119 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmp_2qwzi55/tmp6vl2hqeu/tf_data_snapshot
I0000 00:00:1711695621.367284 1913119 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmp_2qwzi55/tmp6vl2hqeu/tf_data_snapshot
2024-03-29 07:00:21.367735: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive
2024-03-29 07:00:21.956674: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 36129
2024-03-29 07:00:21.957391: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpjhaj5dwx/tf_data_dispatcher_journal
2024-03-29 07:00:21.957560: I tensorflow/core/data/service/dispatcher_impl.cc:252] Restored from journal in 38us.
2024-03-29 07:00:22.166349: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 36129
[       OK ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSplitNames_test_mode_graph_tfapiversion_2_badsplitfilename_split
[ RUN      ] SnapshotFtTest.testSnapshotRecoveryFailsWithDuplicateGlobalIndexInSplitName_test_mode_eager_tfapiversion_1
[  SKIPPED ] SnapshotFtTest.testSnapshotRecoveryFailsWithDuplicateGlobalIndexInSplitName_test_mode_eager_tfapiversion_1
[ RUN      ] SnapshotFtTest.testSnapshotRecoveryFailsWithOutOfOrderSplitName_test_mode_eager_tfapiversion_2
2024-03-29 07:00:24.331071: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpti6lq02v/tf_data_dispatcher_journal
2024-03-29 07:00:24.331154: I tensorflow/core/data/service/dispatcher_impl.cc:242] No journal found. Starting dispatcher from new state.
2024-03-29 07:00:24.331442: I tensorflow/core/data/service/dispatcher_impl.cc:271] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpti6lq02v" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3
2024-03-29 07:00:24.331466: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:41087
2024-03-29 07:00:24.385246: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive
I0000 00:00:1711695624.396702 1926697 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpe5hacnal/tmp57g37zyg/tf_data_snapshot
I0000 00:00:1711695624.571317 1926697 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpe5hacnal/tmp57g37zyg/tf_data_snapshot
2024-03-29 07:00:24.885949: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 41087
2024-03-29 07:00:24.886759: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpti6lq02v/tf_data_dispatcher_journal
2024-03-29 07:00:24.886924: I tensorflow/core/data/service/dispatcher_impl.cc:252] Restored from journal in 35us.
2024-03-29 07:00:24.987171: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 41087
[       OK ] SnapshotFtTest.testSnapshotRecoveryFailsWithOutOfOrderSplitName_test_mode_eager_tfapiversion_2
[ RUN      ] SnapshotFtTest.testWorkersDontExceedMaxStreamAssignments_test_mode_graph_tfapiversion_2_workermaxconcurrentsnapshots_1
2024-03-29 07:00:24.991258: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpv3te3kx2/tf_data_dispatcher_journal
2024-03-29 07:00:24.991335: I tensorflow/core/data/service/dispatcher_impl.cc:242] No journal found. Starting dispatcher from new state.
2024-03-29 07:00:24.991608: I tensorflow/core/data/service/dispatcher_impl.cc:271] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpv3te3kx2" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 1
2024-03-29 07:00:24.991640: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:45527
2024-03-29 07:00:24.994715: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:45527. Worker config: protocol: "grpc" dispatcher_address: "localhost:45527" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384
2024-03-29 07:00:24.994927: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:36915
2024-03-29 07:00:24.997120: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:45527. Worker config: protocol: "grpc" dispatcher_address: "localhost:45527" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384
2024-03-29 07:00:24.997315: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:45497
2024-03-29 07:00:25.030648: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695625.042589 1928384 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0
I0000 00:00:1711695625.207581 1928384 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0
I0000 00:00:1711695625.296021 1928922 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1
I0000 00:00:1711695625.404880 1928920 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0, created stream_0 and assigned to localhost:36915
2024-03-29 07:00:25.455263: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0, stream: 0, compression: SNAPPY }
2024-03-29 07:00:25.455796: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0, stream 0, chunk 0.
I0000 00:00:1711695625.477796 1928922 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1
I0000 00:00:1711695625.502687 1929649 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__5e028826cbf9a3cf_ldcg-aarch64-02-870f8eb4-1735997-614c731b82a47.tfrecord*.
I0000 00:00:1711695625.513120 1928922 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2
I0000 00:00:1711695625.819607 1928428 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0, created stream_1 and assigned to localhost:45497
2024-03-29 07:00:26.129473: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0, stream: 1, compression: SNAPPY }
2024-03-29 07:00:26.300799: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0, stream 1, chunk 0.
I0000 00:00:1711695626.508328 1928922 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2
2024-03-29 07:00:26.510065: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695626.510629 1929650 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b5008aa6c23bc741_ldcg-aarch64-02-6233e1e8-1735997-614c731b8541b.tfrecord*.
I0000 00:00:1711695626.576537 1931776 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3 I0000 00:00:1711695626.688305 1931776 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3 I0000 00:00:1711695627.129845 1932992 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4 I0000 00:00:1711695627.580859 1932992 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4 2024-03-29 07:00:27.581456: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:00:27.581510: I tensorflow/core/data/service/dispatcher_impl.cc:1491] Lost worker localhost:45497 due to timeout 2024-03-29 07:00:27.581528: I tensorflow/core/data/service/dispatcher_impl.cc:1491] Lost worker localhost:36915 due to timeout I0000 00:00:1711695627.595198 1931604 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1416f76424ae78a0_ldcg-aarch64-02-908d4a28-1735997-614c731c5252e.tfrecord*. I0000 00:00:1711695627.609974 1932992 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5 I0000 00:00:1711695627.757686 1932992 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5 I0000 00:00:1711695627.828815 1932992 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6 I0000 00:00:1711695628.197239 1932992 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6 I0000 00:00:1711695628.285792 1936434 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7 I0000 00:00:1711695628.630230 1936434 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7 2024-03-29 07:00:28.665071: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695628.669052 1931604 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1416f76424ae78a0_ldcg-aarch64-02-908d4a28-1735997-614c731c5252e.tfrecord*. I0000 00:00:1711695628.780515 1936434 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8 I0000 00:00:1711695628.897141 1936434 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8 I0000 00:00:1711695628.934432 1938989 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9 I0000 00:00:1711695628.998223 1938989 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9 2024-03-29 07:00:29.467277: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45527 2024-03-29 07:00:29.468085: I tensorflow/core/data/service/dispatcher_impl.cc:235] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpv3te3kx2/tf_data_dispatcher_journal 2024-03-29 07:00:29.468297: I tensorflow/core/data/service/dispatcher_impl.cc:252] Restored from journal in 84us. 2024-03-29 07:00:29.566368: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-03-29 07:00:29.566716: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1711695629.794661 1940733 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1 I0000 00:00:1711695629.967933 1940740 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7 I0000 00:00:1711695629.981386 1940742 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3 I0000 00:00:1711695629.988600 1940744 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2 I0000 00:00:1711695629.990527 1940737 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9 I0000 00:00:1711695630.002904 1940739 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4 I0000 00:00:1711695630.019517 1940732 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0 I0000 00:00:1711695630.044455 1940735 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8 I0000 00:00:1711695630.096520 1940743 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6 I0000 00:00:1711695630.106636 1940741 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5 2024-03-29 07:00:30.155272: I tensorflow/core/data/service/dispatcher_impl.cc:271] Started tf.data service dispatcher with config port: 45527 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpv3te3kx2" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 1 2024-03-29 07:00:30.155417: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:45527 2024-03-29 07:00:30.158006: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695630.158795 1929649 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__5e028826cbf9a3cf_ldcg-aarch64-02-870f8eb4-1735997-614c731b82a47.tfrecord*. 
2024-03-29 07:00:31.158157: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695631.159141 1929649 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__5e028826cbf9a3cf_ldcg-aarch64-02-870f8eb4-1735997-614c731b82a47.tfrecord*. I0000 00:00:1711695632.159873 1931604 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1416f76424ae78a0_ldcg-aarch64-02-908d4a28-1735997-614c731c5252e.tfrecord*. 2024-03-29 07:00:32.165069: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695633.166229 1931604 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1416f76424ae78a0_ldcg-aarch64-02-908d4a28-1735997-614c731c5252e.tfrecord*. 
2024-03-29 07:00:33.175140: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695634.166696 1929649 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__5e028826cbf9a3cf_ldcg-aarch64-02-870f8eb4-1735997-614c731b82a47.tfrecord*. 2024-03-29 07:00:34.195073: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:00:35.035078: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2632, chunk size: 35.9844KB. 2024-03-29 07:00:35.035574: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_0/checkpoints/checkpoint_4_2632. 
Checkpointing distributed tf.data snapshot writer took 442us 2024-03-29 07:00:35.036112: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_0/checkpoints 2024-03-29 07:00:35.036377: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0, stream: 0, compression: SNAPPY } 2024-03-29 07:00:35.042388: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0, stream: 1, compression: SNAPPY }. Stream 1, chunk 0, number of elements in chunk: 2368, chunk size: 32.375KB. 2024-03-29 07:00:35.144504: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_1/checkpoints/checkpoint_4_2368. 
Checkpointing distributed tf.data snapshot writer took 102.059ms 2024-03-29 07:00:35.196076: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:00:35.196128: I tensorflow/core/data/service/dispatcher_impl.cc:1491] Lost worker localhost:36915 due to timeout 2024-03-29 07:00:35.361426: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0/streams/stream_1/checkpoints I0000 00:00:1711695635.639479 1955248 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5, created stream_0 and assigned to localhost:36915 2024-03-29 07:00:35.749452: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5, stream: 0, compression: SNAPPY } 2024-03-29 07:00:35.749917: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5, stream 0, chunk 0. 
2024-03-29 07:00:35.869643: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0, stream: 1, compression: SNAPPY } I0000 00:00:1711695636.165331 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 2024-03-29 07:00:36.198003: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:00:36.198046: I tensorflow/core/data/service/dispatcher_impl.cc:1491] Lost worker localhost:45497 due to timeout I0000 00:00:1711695636.285989 1959361 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_0 I0000 00:00:1711695636.396246 1960249 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9, created stream_0 and assigned to localhost:45497 2024-03-29 07:00:36.463603: I 
tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9, stream: 0, compression: SNAPPY } 2024-03-29 07:00:36.464238: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9, stream 0, chunk 0. 2024-03-29 07:00:37.215064: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:00:38.245133: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695638.985040 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 
2024-03-29 07:00:39.245335: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695640.146151 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 2024-03-29 07:00:40.285070: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:00:41.295052: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:00:42.296282: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:00:43.305058: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no 
reported processing and target processing times for at least one iteration I0000 00:00:1711695643.305355 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 2024-03-29 07:00:44.305314: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695644.605581 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 
2024-03-29 07:00:45.325064: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695645.846919 1961154 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__9cbef12251f3217c_ldcg-aarch64-02-107fabfb-1735997-614c7326007e8.tfrecord*. 2024-03-29 07:00:46.335061: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:00:47.345055: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695647.987621 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 
2024-03-29 07:00:48.355057: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:00:49.375055: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695649.626141 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 
2024-03-29 07:00:50.385168: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:00:51.405054: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695652.026166 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 2024-03-29 07:00:52.410571: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695653.275032 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 
2024-03-29 07:00:53.415896: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:00:54.433613: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695654.856231 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:00:55.433808: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:00:56.434005: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695656.752406 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:00:57.465052: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695657.936618 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:00:58.475047: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695659.035531 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:00:59.478275: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695660.175155 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:00.485058: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695661.335598 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:01.495168: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:02.515067: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695663.156322 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:03.515373: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:04.535107: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695665.355036 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:05.555043: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695666.448671 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:06.555396: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:07.575046: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695667.735521 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:08.615066: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695668.855561 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:09.635049: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695670.136723 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:10.645059: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:11.645393: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695671.995156 1961155 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__bfe7394793e48787_ldcg-aarch64-02-39dfad05-1735997-614c7326006ac.tfrecord*.
2024-03-29 07:01:12.646074: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695673.173478 1961155 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__bfe7394793e48787_ldcg-aarch64-02-39dfad05-1735997-614c7326006ac.tfrecord*.
2024-03-29 07:01:13.655077: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695674.395153 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:14.663528: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:15.664574: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695675.706348 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:16.675050: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695677.077646 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:17.692960: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695678.376924 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:18.695048: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695679.646154 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:19.705042: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:20.705218: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695681.235552 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:21.715037: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:22.715231: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695682.793495 1961154 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__9cbef12251f3217c_ldcg-aarch64-02-107fabfb-1735997-614c7326007e8.tfrecord*.
2024-03-29 07:01:23.735106: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:24.745054: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695685.157122 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:25.745485: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695686.246748 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:26.747138: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:27.755070: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:27.755175: I tensorflow/core/data/service/dispatcher_impl.cc:1491] Lost worker localhost:45497 due to timeout
2024-03-29 07:01:27.755193: I tensorflow/core/data/service/dispatcher_impl.cc:1491] Lost worker localhost:36915 due to timeout
I0000 00:00:1711695687.915934 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:28.765038: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695689.247191 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
I0000 00:00:1711695689.525502 2103796 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5]: 0/1 streams completed; 245/5000 splits assigned or completed.
I0000 00:00:1711695689.525731 2103796 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9]: 0/1 streams completed; 427/5000 splits assigned or completed.
2024-03-29 07:01:29.765386: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:30.785065: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695691.416007 1961154 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__9cbef12251f3217c_ldcg-aarch64-02-107fabfb-1735997-614c7326007e8.tfrecord*.
2024-03-29 07:01:31.805050: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:32.805808: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695692.815597 1961155 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__bfe7394793e48787_ldcg-aarch64-02-39dfad05-1735997-614c7326006ac.tfrecord*.
2024-03-29 07:01:33.815046: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695693.987843 1961155 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__bfe7394793e48787_ldcg-aarch64-02-39dfad05-1735997-614c7326006ac.tfrecord*.
2024-03-29 07:01:34.825059: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695695.091022 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:35.835045: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695696.585088 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:36.845065: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695697.656157 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:37.865064: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:38.865341: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695699.065997 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:39.875053: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695700.676785 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:40.895365: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:41.905061: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:42.915086: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:43.935049: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:44.975053: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695705.336562 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:45.995034: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:45.995112: I tensorflow/core/data/service/dispatcher_impl.cc:1491] Lost worker localhost:36915 due to timeout
2024-03-29 07:01:45.995131: I tensorflow/core/data/service/dispatcher_impl.cc:1491] Lost worker localhost:45497 due to timeout
I0000 00:00:1711695706.437496 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:47.005145: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:48.005325: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:49.045053: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:50.055041: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695710.836826 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:51.065072: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:52.085924: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695712.155045 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:53.105044: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:54.115048: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:55.115295: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695715.875051 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:56.115464: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:01:57.135060: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695717.345783 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:01:58.145075: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695718.565571 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:01:59.155048: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695719.995053 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:02:00.165053: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:01.165342: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:02.175087: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:03.185063: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695723.435885 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:02:04.193235: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:05.195044: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695725.426130 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:02:06.198463: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:07.205047: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695727.775622 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:02:08.215026: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695728.965404 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:02:09.225053: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:10.245057: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695730.848170 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:02:11.265059: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695732.177826 1961155 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__bfe7394793e48787_ldcg-aarch64-02-39dfad05-1735997-614c7326006ac.tfrecord*.
2024-03-29 07:02:12.270387: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:13.275049: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695733.575929 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:02:14.295066: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:15.295263: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:16.325061: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:17.344238: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695737.519602 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:02:18.345096: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:19.350071: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:20.365102: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695741.135028 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:02:21.375151: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:22.375773: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695742.892393 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:02:23.385097: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695743.908411 1961155 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__bfe7394793e48787_ldcg-aarch64-02-39dfad05-1735997-614c7326006ac.tfrecord*.
2024-03-29 07:02:24.385497: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:25.425053: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695745.979714 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:02:26.435123: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695747.081228 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:02:27.445505: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695748.448723 1961154 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__9cbef12251f3217c_ldcg-aarch64-02-107fabfb-1735997-614c7326007e8.tfrecord*.
2024-03-29 07:02:28.465049: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:29.475076: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695749.605714 2244188 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5]: 0/1 streams completed; 615/5000 splits assigned or completed.
I0000 00:00:1711695749.605975 2244308 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9]: 0/1 streams completed; 1260/5000 splits assigned or completed.
I0000 00:00:1711695749.755084 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:02:30.485098: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:31.495074: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:32.497999: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:33.505044: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695753.532856 1961155 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__bfe7394793e48787_ldcg-aarch64-02-39dfad05-1735997-614c7326006ac.tfrecord*.
2024-03-29 07:02:34.505506: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695754.577717 1961154 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__9cbef12251f3217c_ldcg-aarch64-02-107fabfb-1735997-614c7326007e8.tfrecord*.
I0000 00:00:1711695755.578191 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:02:35.645052: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:02:36.655050: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695756.698253 1961155 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__bfe7394793e48787_ldcg-aarch64-02-39dfad05-1735997-614c7326006ac.tfrecord*.
2024-03-29 07:02:37.695102: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695757.971120 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
2024-03-29 07:02:38.705058: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695759.046636 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:02:39.705282: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695760.565338 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:02:40.715045: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:02:41.715382: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695762.596293 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 
2024-03-29 07:02:42.725343: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:02:43.735066: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695764.245060 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 
2024-03-29 07:02:44.755048: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:02:45.795649: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695765.868079 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 2024-03-29 07:02:46.805068: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695767.155864 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 
2024-03-29 07:02:47.825046: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695768.605052 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 2024-03-29 07:02:48.837415: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:02:49.845111: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695770.066428 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 
2024-03-29 07:02:50.855093: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:02:51.868821: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:02:52.875046: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695773.715772 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 
2024-03-29 07:02:53.875810: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:02:54.885077: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:02:55.885259: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695776.520512 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 
2024-03-29 07:02:56.895050: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695777.625052 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 2024-03-29 07:02:57.905051: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695778.846200 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 
2024-03-29 07:02:58.915058: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:02:59.925059: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695780.516018 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 2024-03-29 07:03:00.935055: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695781.715476 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 
2024-03-29 07:03:01.955046: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:03:02.955239: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695783.897103 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 
2024-03-29 07:03:03.955449: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:03:04.956333: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695785.656972 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 
2024-03-29 07:03:05.975058: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:03:06.985076: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695787.125523 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 
2024-03-29 07:03:07.985253: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:03:08.995063: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695789.455037 1961154 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__9cbef12251f3217c_ldcg-aarch64-02-107fabfb-1735997-614c7326007e8.tfrecord*. 
2024-03-29 07:03:10.005055: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:03:11.016695: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695791.043945 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 2024-03-29 07:03:12.025097: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695792.576596 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 
2024-03-29 07:03:13.035082: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695793.612334 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 2024-03-29 07:03:14.045166: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695794.985135 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 
2024-03-29 07:03:15.055053: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:03:16.061442: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:03:17.066408: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695797.575807 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 
2024-03-29 07:03:18.075077: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:03:19.085089: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695799.756969 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 
2024-03-29 07:03:20.109619: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:03:21.195218: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695801.646999 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 2024-03-29 07:03:22.199095: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695802.973593 1961155 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__bfe7394793e48787_ldcg-aarch64-02-39dfad05-1735997-614c7326006ac.tfrecord*. 
2024-03-29 07:03:23.206014: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-03-29 07:03:24.211717: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695804.437194 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 2024-03-29 07:03:25.225052: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695805.675592 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*. 
2024-03-29 07:03:26.244659: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695806.686626 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 2024-03-29 07:03:27.345050: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695807.826621 1961155 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__67a379da15960763_ldcg-aarch64-02-39dfad05-1735997-614c73c51a412.tfrecord*. 
2024-03-29 07:03:28.365046: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695808.965091 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*. 2024-03-29 07:03:29.377034: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695809.606453 2428698 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5]: 0/1 streams completed; 1805/5000 splits assigned or completed. I0000 00:00:1711695809.713955 2431207 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9]: 0/1 streams completed; 2907/5000 splits assigned or completed. 
I0000 00:00:1711695810.016385 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__68fd73852510ba2d_ldcg-aarch64-02-4284e0a4-1735997-614c732553021.tfrecord*.
2024-03-29 07:03:30.405056: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695811.155098 1961155 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__67a379da15960763_ldcg-aarch64-02-39dfad05-1735997-614c73c51a412.tfrecord*.
I0000 00:00:1711695812.515731 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35f25c801ca4dec9_ldcg-aarch64-02-67210590-1735997-614c732557e42.tfrecord*.
I0000 00:00:1711695820.481496 1961154 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__5b19cdb12f66d2f6_ldcg-aarch64-02-107fabfb-1735997-614c73c51a800.tfrecord*.
I0000 00:00:1711695827.393835 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fba2dd5b140d1268_ldcg-aarch64-02-67210590-1735997-614c73db2e02a.tfrecord*.
I0000 00:00:1711695828.394340 1959023 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__cf5fe788bbad915f_ldcg-aarch64-02-4284e0a4-1735997-614c73db27225.tfrecord*.
2024-03-29 07:03:52.177227: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 5000, chunk size: 68.3594KB.
2024-03-29 07:03:52.177692: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/checkpoints/checkpoint_6_5000.
Checkpointing distributed tf.data snapshot writer took 411us
2024-03-29 07:03:52.178353: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9/streams/stream_0/checkpoints
2024-03-29 07:03:52.178651: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9, stream: 0, compression: SNAPPY }
I0000 00:00:1711695832.431374 2524147 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_9
I0000 00:00:1711695832.556427 2524147 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4]: 0/0 streams completed; 0/5000 splits assigned or completed.
I0000 00:00:1711695832.557105 2524147 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4, created stream_0 and assigned to localhost:45497
2024-03-29 07:03:52.586843: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4, stream: 0, compression: SNAPPY }
2024-03-29 07:03:52.587337: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4, stream 0, chunk 0.
I0000 00:00:1711695833.502044 1959022 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8e1ffb3a7d2cdd50_ldcg-aarch64-02-67210590-1735997-614c73e1519b5.tfrecord*.
2024-03-29 07:03:53.717120: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 5000, chunk size: 68.3594KB.
2024-03-29 07:03:53.717617: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/checkpoints/checkpoint_6_5000.
Checkpointing distributed tf.data snapshot writer took 435us
2024-03-29 07:03:53.718257: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5/streams/stream_0/checkpoints
2024-03-29 07:03:53.718526: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5, stream: 0, compression: SNAPPY }
I0000 00:00:1711695833.774257 2534450 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_5
I0000 00:00:1711695833.875587 2524147 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7]: 0/0 streams completed; 0/5000 splits assigned or completed.
I0000 00:00:1711695833.876257 2524147 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7, created stream_0 and assigned to localhost:36915
2024-03-29 07:03:54.099176: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7, stream: 0, compression: SNAPPY }
2024-03-29 07:03:54.099624: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7, stream 0, chunk 0.
I0000 00:00:1711695834.566323 2537073 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4c83c769d1564e30_ldcg-aarch64-02-67210590-1735997-614c73e27b25f.tfrecord*.
I0000 00:00:1711695836.676369 2527678 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__9c27ae39bbcd7b7d_ldcg-aarch64-02-48bf91d8-1735997-614c73e10e194.tfrecord*.
2024-03-29 07:03:56.836986: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695837.676682 2527678 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__9c27ae39bbcd7b7d_ldcg-aarch64-02-48bf91d8-1735997-614c73e10e194.tfrecord*. 2024-03-29 07:03:57.845051: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695838.676977 2527678 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__9c27ae39bbcd7b7d_ldcg-aarch64-02-48bf91d8-1735997-614c73e10e194.tfrecord*. 
2024-03-29 07:03:58.865053: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695839.690240 2537073 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4c83c769d1564e30_ldcg-aarch64-02-67210590-1735997-614c73e27b25f.tfrecord*. 2024-03-29 07:03:59.885087: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695840.716855 2527677 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__747206fdf690c7d5_ldcg-aarch64-02-39dfad05-1735997-614c73e7e5b4a.tfrecord*. 
2024-03-29 07:04:00.905355: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695841.725576 2527678 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b4b843c70e4f1f03_ldcg-aarch64-02-48bf91d8-1735997-614c73e81d1b8.tfrecord*. 2024-03-29 07:04:01.935415: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695842.735620 2537073 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__594d25752159f1fc_ldcg-aarch64-02-67210590-1735997-614c73e990e88.tfrecord*. 
2024-03-29 07:04:02.945056: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695843.746457 2537073 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__594d25752159f1fc_ldcg-aarch64-02-67210590-1735997-614c73e990e88.tfrecord*. 2024-03-29 07:04:03.950588: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695844.747011 2537073 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__594d25752159f1fc_ldcg-aarch64-02-67210590-1735997-614c73e990e88.tfrecord*. 
2024-03-29 07:04:04.950762: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695845.756605 2527678 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b4b843c70e4f1f03_ldcg-aarch64-02-48bf91d8-1735997-614c73e81d1b8.tfrecord*. 2024-03-29 07:04:05.951417: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695846.766292 2537074 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__24c553c23bda8f7b_ldcg-aarch64-02-4284e0a4-1735997-614c73e9b0618.tfrecord*. 
2024-03-29 07:04:06.955051: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695847.766816 2537073 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__594d25752159f1fc_ldcg-aarch64-02-67210590-1735997-614c73e990e88.tfrecord*. 2024-03-29 07:04:07.965515: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1711695848.806413 2537073 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__594d25752159f1fc_ldcg-aarch64-02-67210590-1735997-614c73e990e88.tfrecord*. 
2024-03-29 07:04:08.975067: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695849.806949 2527677 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6686c23aa86e5817_ldcg-aarch64-02-39dfad05-1735997-614c73f008efe.tfrecord*.
2024-03-29 07:04:09.926549: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 5000, chunk size: 68.3594KB.
2024-03-29 07:04:09.927059: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4/streams/stream_0/checkpoints/checkpoint_6_5000. Checkpointing distributed tf.data snapshot writer took 449us
2024-03-29 07:04:09.927715: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4/streams/stream_0/checkpoints
2024-03-29 07:04:09.927993: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4, stream: 0, compression: SNAPPY }
2024-03-29 07:04:09.993853: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695850.026337 2621527 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_4
I0000 00:00:1711695850.136319 2621527 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1]: 0/0 streams completed; 0/5000 splits assigned or completed.
I0000 00:00:1711695850.136988 2621527 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1, created stream_0 and assigned to localhost:45497
2024-03-29 07:04:10.196264: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1, stream: 0, compression: SNAPPY }
2024-03-29 07:04:10.196770: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1, stream 0, chunk 0.
I0000 00:00:1711695850.807547 2627453 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__f273eae8c26fa6d0_ldcg-aarch64-02-48bf91d8-1735997-614c73f1da982.tfrecord*.
2024-03-29 07:04:11.027580: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695851.815621 2627453 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__f273eae8c26fa6d0_ldcg-aarch64-02-48bf91d8-1735997-614c73f1da982.tfrecord*.
2024-03-29 07:04:12.027742: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695852.815824 2627450 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4cb6cd959a1a7b13_ldcg-aarch64-02-39dfad05-1735997-614c73f1da858.tfrecord*.
2024-03-29 07:04:12.967178: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 5000, chunk size: 68.3594KB.
2024-03-29 07:04:12.967620: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7/streams/stream_0/checkpoints/checkpoint_6_5000. Checkpointing distributed tf.data snapshot writer took 398us
2024-03-29 07:04:12.968286: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7/streams/stream_0/checkpoints
2024-03-29 07:04:12.968561: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7, stream: 0, compression: SNAPPY }
I0000 00:00:1711695853.006000 2640096 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_7
2024-03-29 07:04:13.035041: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695853.107819 2640096 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3]: 0/0 streams completed; 0/5000 splits assigned or completed.
I0000 00:00:1711695853.108444 2640096 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3, created stream_0 and assigned to localhost:36915
2024-03-29 07:04:13.259778: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3, stream: 0, compression: SNAPPY }
2024-03-29 07:04:13.260338: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3, stream 0, chunk 0.
I0000 00:00:1711695853.816147 2627450 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4cb6cd959a1a7b13_ldcg-aarch64-02-39dfad05-1735997-614c73f1da858.tfrecord*.
2024-03-29 07:04:14.035260: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695854.845775 2627450 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4cb6cd959a1a7b13_ldcg-aarch64-02-39dfad05-1735997-614c73f1da858.tfrecord*.
2024-03-29 07:04:15.055047: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695855.897088 2627450 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4cb6cd959a1a7b13_ldcg-aarch64-02-39dfad05-1735997-614c73f1da858.tfrecord*.
2024-03-29 07:04:16.059791: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695856.905959 2648132 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ae04b7c44d182077_ldcg-aarch64-02-4284e0a4-1735997-614c73f4c1789.tfrecord*.
2024-03-29 07:04:17.095144: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695858.107240 2627450 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4cb6cd959a1a7b13_ldcg-aarch64-02-39dfad05-1735997-614c73f1da858.tfrecord*.
2024-03-29 07:04:18.115142: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695859.115673 2648132 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ae04b7c44d182077_ldcg-aarch64-02-4284e0a4-1735997-614c73f4c1789.tfrecord*.
2024-03-29 07:04:19.165384: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695860.115817 2648129 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1dc21310085e1b86_ldcg-aarch64-02-67210590-1735997-614c73f4c1653.tfrecord*.
2024-03-29 07:04:20.175268: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695861.116457 2627450 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3439423efe18cd88_ldcg-aarch64-02-39dfad05-1735997-614c73f9a0cd7.tfrecord*.
2024-03-29 07:04:21.186426: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695862.153592 2648129 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__136f4399243cba5e_ldcg-aarch64-02-67210590-1735997-614c73fd1ecfe.tfrecord*.
2024-03-29 07:04:22.195186: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695863.153837 2627453 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__e1b62b9f1ee81a05_ldcg-aarch64-02-48bf91d8-1735997-614c73f9c8373.tfrecord*.
2024-03-29 07:04:23.195587: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695864.155479 2627453 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__e1b62b9f1ee81a05_ldcg-aarch64-02-48bf91d8-1735997-614c73f9c8373.tfrecord*.
2024-03-29 07:04:24.205046: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695865.156206 2648132 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__de7c86a280a3b236_ldcg-aarch64-02-4284e0a4-1735997-614c73fd1ce64.tfrecord*.
2024-03-29 07:04:25.265056: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695866.165894 2648129 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__136f4399243cba5e_ldcg-aarch64-02-67210590-1735997-614c73fd1ecfe.tfrecord*.
2024-03-29 07:04:26.295193: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:04:26.730347: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 5000, chunk size: 68.3594KB.
2024-03-29 07:04:26.730826: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/checkpoints/checkpoint_6_5000. Checkpointing distributed tf.data snapshot writer took 428us
2024-03-29 07:04:26.731478: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1/streams/stream_0/checkpoints
2024-03-29 07:04:26.731740: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1, stream: 0, compression: SNAPPY }
I0000 00:00:1711695866.926744 2718788 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_1
I0000 00:00:1711695867.046820 2718788 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2]: 0/0 streams completed; 0/5000 splits assigned or completed.
I0000 00:00:1711695867.047481 2718788 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2, created stream_0 and assigned to localhost:45497
I0000 00:00:1711695867.173873 2648132 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__de7c86a280a3b236_ldcg-aarch64-02-4284e0a4-1735997-614c73fd1ce64.tfrecord*.
2024-03-29 07:04:27.175214: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2, stream: 0, compression: SNAPPY }
2024-03-29 07:04:27.175777: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2, stream 0, chunk 0.
2024-03-29 07:04:27.295454: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695868.176251 2648132 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__de7c86a280a3b236_ldcg-aarch64-02-4284e0a4-1735997-614c73fd1ce64.tfrecord*.
2024-03-29 07:04:28.305050: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695869.236235 2648132 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6a8860bc30055fc5_ldcg-aarch64-02-4284e0a4-1735997-614c74034b37c.tfrecord*.
2024-03-29 07:04:29.315049: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:04:29.395292: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 5000, chunk size: 68.3594KB.
2024-03-29 07:04:29.396053: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3/streams/stream_0/checkpoints/checkpoint_6_5000. Checkpointing distributed tf.data snapshot writer took 662us
2024-03-29 07:04:29.396728: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3/streams/stream_0/checkpoints
2024-03-29 07:04:29.397022: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3, stream: 0, compression: SNAPPY }
I0000 00:00:1711695869.497651 2716469 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_3
I0000 00:00:1711695869.605502 2716469 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8]: 0/0 streams completed; 0/5000 splits assigned or completed.
I0000 00:00:1711695869.606233 2716469 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8, created stream_0 and assigned to localhost:36915
2024-03-29 07:04:29.825431: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8, stream: 0, compression: SNAPPY }
2024-03-29 07:04:29.826042: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8, stream 0, chunk 0.
I0000 00:00:1711695870.315063 2735503 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__f752403506bb83e5_ldcg-aarch64-02-67210590-1735997-614c7404909d7.tfrecord*.
2024-03-29 07:04:30.345124: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695871.316195 2735503 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__f752403506bb83e5_ldcg-aarch64-02-67210590-1735997-614c7404909d7.tfrecord*.
2024-03-29 07:04:31.350151: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695872.316752 2725561 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6b53cf8e5db8ad43_ldcg-aarch64-02-39dfad05-1735997-614c74020b010.tfrecord*.
2024-03-29 07:04:32.365387: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:04:33.365586: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:04:33.365654: I tensorflow/core/data/service/dispatcher_impl.cc:1491] Lost worker localhost:36915 due to timeout
I0000 00:00:1711695873.376775 2735505 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__21c60917715a9a08_ldcg-aarch64-02-4284e0a4-1735997-614c740491eb1.tfrecord*.
2024-03-29 07:04:34.375504: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695874.385424 2735505 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__21c60917715a9a08_ldcg-aarch64-02-4284e0a4-1735997-614c740491eb1.tfrecord*.
2024-03-29 07:04:35.382204: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695875.385737 2725561 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6b53cf8e5db8ad43_ldcg-aarch64-02-39dfad05-1735997-614c74020b010.tfrecord*.
I0000 00:00:1711695876.385937 2725561 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a9222c60ea5adf37_ldcg-aarch64-02-39dfad05-1735997-614c740a640d1.tfrecord*.
2024-03-29 07:04:36.386695: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695877.387401 2735505 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39202d5464c7a9f9_ldcg-aarch64-02-4284e0a4-1735997-614c740afe4e6.tfrecord*.
2024-03-29 07:04:37.405037: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695878.387906 2735505 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39202d5464c7a9f9_ldcg-aarch64-02-4284e0a4-1735997-614c740afe4e6.tfrecord*.
2024-03-29 07:04:38.415083: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:04:39.425046: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695879.426783 2735503 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__cea29b8c89a6c0e0_ldcg-aarch64-02-67210590-1735997-614c740afd61b.tfrecord*.
2024-03-29 07:04:40.425362: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695880.487281 2725561 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a9222c60ea5adf37_ldcg-aarch64-02-39dfad05-1735997-614c740a640d1.tfrecord*.
2024-03-29 07:04:41.495059: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695881.495115 2735503 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__cea29b8c89a6c0e0_ldcg-aarch64-02-67210590-1735997-614c740afd61b.tfrecord*.
I0000 00:00:1711695882.497404 2735503 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__cea29b8c89a6c0e0_ldcg-aarch64-02-67210590-1735997-614c740afd61b.tfrecord*.
2024-03-29 07:04:42.499410: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695883.497542 2735505 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39202d5464c7a9f9_ldcg-aarch64-02-4284e0a4-1735997-614c740afe4e6.tfrecord*.
2024-03-29 07:04:43.515055: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695884.497796 2735503 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__906eeff7b777f303_ldcg-aarch64-02-67210590-1735997-614c741212801.tfrecord*.
2024-03-29 07:04:44.517882: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:04:44.733171: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 5000, chunk size: 68.3594KB.
2024-03-29 07:04:44.733647: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/checkpoints/checkpoint_6_5000. Checkpointing distributed tf.data snapshot writer took 422us
2024-03-29 07:04:44.734334: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8/streams/stream_0/checkpoints
2024-03-29 07:04:44.734609: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8, stream: 0, compression: SNAPPY }
I0000 00:00:1711695884.835595 2795731 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_8
I0000 00:00:1711695884.936770 2795731 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6]: 0/0 streams completed; 0/5000 splits assigned or completed.
I0000 00:00:1711695885.055698 2795731 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6, created stream_0 and assigned to localhost:36915
2024-03-29 07:04:45.083296: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6, stream: 0, compression: SNAPPY }
2024-03-29 07:04:45.151915: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 5000, chunk size: 68.3594KB.
2024-03-29 07:04:45.152355: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2/streams/stream_0/checkpoints/checkpoint_6_5000. Checkpointing distributed tf.data snapshot writer took 388us
2024-03-29 07:04:45.152988: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2/streams/stream_0/checkpoints
2024-03-29 07:04:45.153266: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2, stream: 0, compression: SNAPPY }
2024-03-29 07:04:45.206683: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6, stream 0, chunk 0.
I0000 00:00:1711695885.375295 2797288 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_2
I0000 00:00:1711695885.496826 2785902 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6, created stream_1 and assigned to localhost:45497
I0000 00:00:1711695885.505529 2798268 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2c6d632d35adea39_ldcg-aarch64-02-39dfad05-1735997-614c74133f38c.tfrecord*.
2024-03-29 07:04:45.525415: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:04:45.625255: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6, stream: 1, compression: SNAPPY }
2024-03-29 07:04:45.625777: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6, stream 1, chunk 0.
I0000 00:00:1711695886.505842 2798268 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2c6d632d35adea39_ldcg-aarch64-02-39dfad05-1735997-614c74133f38c.tfrecord*.
2024-03-29 07:04:46.535561: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695887.517798 2799867 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__901431c28c78e838_ldcg-aarch64-02-49d69c8a-1735997-614c7413a34f4.tfrecord*.
2024-03-29 07:04:47.535735: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695888.526300 2798268 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2c6d632d35adea39_ldcg-aarch64-02-39dfad05-1735997-614c74133f38c.tfrecord*.
2024-03-29 07:04:48.605907: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695889.526894 2798267 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6471bec4188089b4_ldcg-aarch64-02-1393eea6-1735997-614c74133cc66.tfrecord*.
2024-03-29 07:04:49.609223: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695890.546418 2799867 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__901431c28c78e838_ldcg-aarch64-02-49d69c8a-1735997-614c7413a34f4.tfrecord*.
2024-03-29 07:04:50.609403: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695891.547005 2798268 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2c6d632d35adea39_ldcg-aarch64-02-39dfad05-1735997-614c74133f38c.tfrecord*.
2024-03-29 07:04:51.609594: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695892.547202 2799868 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__52ed098a60fd14c6_ldcg-aarch64-02-726f4eb1-1735997-614c7413a34f4.tfrecord*.
2024-03-29 07:04:52.625040: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1711695893.553679 2798268 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__667bc3050a93c71a_ldcg-aarch64-02-39dfad05-1735997-614c741a40512.tfrecord*.
2024-03-29 07:04:53.635078: W tensorflow/core/data/service/dispatcher_impl.cc:1403] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-03-29 07:04:54.015748: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2695, chunk size: 36.8457KB.
2024-03-29 07:04:54.016318: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_0/checkpoints/checkpoint_4_2695. Checkpointing distributed tf.data snapshot writer took 510us
2024-03-29 07:04:54.035292: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6, stream: 1, compression: SNAPPY }. Stream 1, chunk 0, number of elements in chunk: 2305, chunk size: 31.5137KB.
2024-03-29 07:04:54.035933: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_1/checkpoints/checkpoint_2_2305. Checkpointing distributed tf.data snapshot writer took 568us
2024-03-29 07:04:54.036331: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_1/checkpoints
2024-03-29 07:04:54.036627: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6, stream: 1, compression: SNAPPY }
2024-03-29 07:04:54.065293: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6/streams/stream_0/checkpoints
2024-03-29 07:04:54.065645: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6, stream: 0, compression: SNAPPY }
I0000 00:00:1711695894.206501 2829683 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/9da16ec48d3c4918370a599d781d43a379s9lunc/tmpgtb4ed6t/tmps79xhvio/tf_data_snapshot_6
2024-03-29 07:04:54.308245: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 45497
2024-03-29 07:04:54.349251: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 36915
2024-03-29 07:04:54.460001: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45527
[       OK ] SnapshotFtTest.testWorkersDontExceedMaxStreamAssignments_test_mode_graph_tfapiversion_2_workermaxconcurrentsnapshots_1
----------------------------------------------------------------------
Ran 11 tests in 283.318s

OK (skipped=6)
-- Test timed out at 2024-03-29 07:14:04 UTC --
Current thread 0x0000ffff8a117430 (most recent call first):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/lib/io/file_io.py", line 676 in delete_recursively_v2
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/lib/io/file_io.py", line 663 in delete_recursively
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 82 in delete_temp_dir
================================================================================
==================== Test output for //tensorflow/python/eager:small_constants_optimizer_test_cpu:
2024-03-29 07:10:44.907468: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Running tests under Python 3.12.0: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/python_aarch64-unknown-linux-gnu/bin/python3
[ RUN      ] FunctionTest.test_grappler_optimization
[  FAILED  ] FunctionTest.test_grappler_optimization
INFO:tensorflow:time(__main__.FunctionTest.test_grappler_optimization): 96.95s
I0329 07:13:00.422414 281472833057840 test_util.py:2634] time(__main__.FunctionTest.test_grappler_optimization): 96.95s
[ RUN      ] FunctionTest.test_session
[  SKIPPED ] FunctionTest.test_session
[ RUN      ] FunctionTest.test_small_constants_optimization_disabled
WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1971: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
W0329 07:13:00.963916 281472833057840 deprecation.py:50] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1971: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
[  SKIPPED ] FunctionTest.test_small_constants_optimization_disabled
INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_disabled): 0.0s
I0329 07:13:00.964623 281472833057840 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_disabled): 0.0s
[ RUN      ] FunctionTest.test_small_constants_optimization_invalid_input
INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_invalid_input): 0.29s
I0329 07:13:01.257695 281472833057840 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_invalid_input): 0.29s
[       OK ] FunctionTest.test_small_constants_optimization_invalid_input
[ RUN      ] FunctionTest.test_small_constants_optimization_with_grappler
INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_with_grappler): 87.02s
I0329 07:14:28.274921 281472833057840 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_with_grappler): 87.02s
[       OK ] FunctionTest.test_small_constants_optimization_with_grappler
[ RUN      ] FunctionTest.test_small_constants_optimization_without_grappler
INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_without_grappler): 120.51s
I0329 07:16:28.794497 281472833057840 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_without_grappler): 120.51s
[       OK ] FunctionTest.test_small_constants_optimization_without_grappler
======================================================================
FAIL: test_grappler_optimization (__main__.FunctionTest.test_grappler_optimization)
FunctionTest.test_grappler_optimization
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 1934, in decorated
    return f(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/small_constants_optimizer_test.py", line 71, in test_grappler_optimization
    self.assertLess(opt_benchmark * 3, benchmark)
AssertionError: 0.7996353451162577 not less than 0.6920379512012005
----------------------------------------------------------------------
Ran 6 tests in 305.328s

FAILED (failures=1, skipped=2)
================================================================================
==================== Test output for //tensorflow/python/eager:small_constants_optimizer_test_cpu:
2024-03-29 07:16:39.712357: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Running tests under Python 3.12.0: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/python_aarch64-unknown-linux-gnu/bin/python3
[ RUN      ] FunctionTest.test_grappler_optimization
[  FAILED  ] FunctionTest.test_grappler_optimization
INFO:tensorflow:time(__main__.FunctionTest.test_grappler_optimization): 114.2s
I0329 07:18:36.316704 281473227715632 test_util.py:2634] time(__main__.FunctionTest.test_grappler_optimization): 114.2s
[ RUN      ] FunctionTest.test_session
[  SKIPPED ] FunctionTest.test_session
[ RUN      ] FunctionTest.test_small_constants_optimization_disabled
WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1971: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
W0329 07:18:36.829221 281473227715632 deprecation.py:50] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1971: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
[  SKIPPED ] FunctionTest.test_small_constants_optimization_disabled
INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_disabled): 0.0s
I0329 07:18:36.829972 281473227715632 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_disabled): 0.0s
[ RUN      ] FunctionTest.test_small_constants_optimization_invalid_input
INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_invalid_input): 0.28s
I0329 07:18:37.116211 281473227715632 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_invalid_input): 0.28s
[       OK ] FunctionTest.test_small_constants_optimization_invalid_input
[ RUN      ] FunctionTest.test_small_constants_optimization_with_grappler
[  FAILED  ] FunctionTest.test_small_constants_optimization_with_grappler
INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_with_grappler): 104.85s
I0329 07:20:21.971188 281473227715632 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_with_grappler): 104.85s
[ RUN      ] FunctionTest.test_small_constants_optimization_without_grappler
INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_without_grappler): 126.64s
I0329 07:22:29.212943 281473227715632 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_without_grappler): 126.64s
[       OK ] FunctionTest.test_small_constants_optimization_without_grappler
======================================================================
FAIL: test_grappler_optimization (__main__.FunctionTest.test_grappler_optimization)
FunctionTest.test_grappler_optimization
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 1934, in decorated
    return f(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/small_constants_optimizer_test.py", line 71, in test_grappler_optimization
    self.assertLess(opt_benchmark * 3, benchmark)
AssertionError: 0.6819345597177744 not less than 0.5021862797439098
======================================================================
FAIL: test_small_constants_optimization_with_grappler (__main__.FunctionTest.test_small_constants_optimization_with_grappler)
FunctionTest.test_small_constants_optimization_with_grappler
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 1934, in decorated
    return f(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/small_constants_optimizer_test.py", line 102, in test_small_constants_optimization_with_grappler
    self.assertLess(opt_benchmark * 2, benchmark)
AssertionError: 0.6717121116816998 not less than 0.6317678224295378
---------------------------------------------------------------------- Ran 6 tests in 347.105s FAILED (failures=2, skipped=2) ================================================================================ ==================== Test output for //tensorflow/python/eager:small_constants_optimizer_test_cpu: 2024-03-29 07:22:39.909103: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. Running tests under Python 3.12.0: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/python_aarch64-unknown-linux-gnu/bin/python3 [ RUN ] FunctionTest.test_grappler_optimization [ FAILED ] FunctionTest.test_grappler_optimization INFO:tensorflow:time(__main__.FunctionTest.test_grappler_optimization): 94.64s I0329 07:24:17.305316 281473816163376 test_util.py:2634] time(__main__.FunctionTest.test_grappler_optimization): 94.64s [ RUN ] FunctionTest.test_session [ SKIPPED ] FunctionTest.test_session [ RUN ] FunctionTest.test_small_constants_optimization_disabled WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1971: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 
W0329 07:24:17.482263 281473816163376 deprecation.py:50] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1971: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. [ SKIPPED ] FunctionTest.test_small_constants_optimization_disabled INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_disabled): 0.0s I0329 07:24:17.483014 281473816163376 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_disabled): 0.0s [ RUN ] FunctionTest.test_small_constants_optimization_invalid_input INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_invalid_input): 0.37s I0329 07:24:17.849872 281473816163376 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_invalid_input): 0.37s [ OK ] FunctionTest.test_small_constants_optimization_invalid_input [ RUN ] FunctionTest.test_small_constants_optimization_with_grappler INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_with_grappler): 91.78s I0329 07:25:49.635439 281473816163376 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_with_grappler): 91.78s [ OK ] FunctionTest.test_small_constants_optimization_with_grappler [ RUN ] FunctionTest.test_small_constants_optimization_without_grappler INFO:tensorflow:time(__main__.FunctionTest.test_small_constants_optimization_without_grappler): 98.97s I0329 07:27:28.618658 281473816163376 test_util.py:2634] time(__main__.FunctionTest.test_small_constants_optimization_without_grappler): 98.97s [ OK ] FunctionTest.test_small_constants_optimization_without_grappler 
======================================================================
FAIL: test_grappler_optimization (__main__.FunctionTest.test_grappler_optimization)
FunctionTest.test_grappler_optimization
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 1934, in decorated
    return f(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/eager/small_constants_optimizer_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/small_constants_optimizer_test.py", line 71, in test_grappler_optimization
    self.assertLess(opt_benchmark * 3, benchmark)
AssertionError: 0.4030857514590025 not less than 0.37429321371018887

----------------------------------------------------------------------
Ran 6 tests in 285.966s

FAILED (failures=1, skipped=2)
================================================================================
//tensorflow/c:c_api_experimental_test PASSED in 85.7s
//tensorflow/c:c_api_function_test PASSED in 73.4s
//tensorflow/c:c_api_test_cpu PASSED in 80.7s
//tensorflow/c:c_test PASSED in 52.0s
//tensorflow/c:env_test_cpu PASSED in 44.9s
//tensorflow/c:kernels_test_cpu PASSED in 78.5s
//tensorflow/c:ops_test PASSED in 66.3s
//tensorflow/c:tf_status_helper_test PASSED in 1.6s
//tensorflow/c:while_loop_test PASSED in 60.2s
//tensorflow/c/eager:c_api_cluster_test_cpu PASSED in 77.9s
//tensorflow/c/eager:c_api_remote_function_test_cpu PASSED in 78.7s
//tensorflow/c/eager:c_api_remote_test_cpu PASSED in 77.5s
//tensorflow/c/eager:c_api_test_cpu PASSED in 78.0s
//tensorflow/c/eager:custom_device_test PASSED in 72.5s
//tensorflow/c/eager:dlpack_test_cpu PASSED in 64.7s
//tensorflow/c/eager/parallel_device:parallel_device_lib_test PASSED in 67.7s
//tensorflow/c/eager/parallel_device:parallel_device_remote_test PASSED in 66.5s
//tensorflow/c/eager/parallel_device:parallel_device_test PASSED in 62.4s
//tensorflow/c/experimental/filesystem/plugins/gcs:expiring_lru_cache_test PASSED in 0.7s
//tensorflow/c/experimental/filesystem/plugins/gcs:ram_file_block_cache_test PASSED in 2.9s
//tensorflow/c/experimental/grappler:grappler_test PASSED in 61.6s
//tensorflow/c/experimental/next_pluggable_device:tensor_pjrt_buffer_util_test PASSED in 25.2s
//tensorflow/c/experimental/ops/gen/common:case_format_test PASSED in 2.7s
//tensorflow/c/experimental/ops/gen/cpp:cpp_generator_test PASSED in 2.7s
//tensorflow/c/experimental/ops/gen/cpp/renderers:renderer_test PASSED in 1.9s
//tensorflow/c/experimental/saved_model/core:constant_loading_test PASSED in 59.9s
//tensorflow/c/experimental/saved_model/core:object_graph_traversal_test PASSED in 36.4s
//tensorflow/c/experimental/saved_model/core:saved_variable_loading_test PASSED in 67.8s
//tensorflow/c/experimental/saved_model/core:signature_flattening_test PASSED in 22.0s
//tensorflow/c/experimental/saved_model/core:tf_concrete_function_loading_test PASSED in 32.3s
//tensorflow/c/experimental/saved_model/core/ops:restore_ops_test PASSED in 25.9s
//tensorflow/c/experimental/saved_model/core/ops:variable_ops_test PASSED in 37.7s
//tensorflow/c/experimental/saved_model/internal:saved_model_api_test PASSED in 63.0s
//tensorflow/c/experimental/stream_executor:stream_executor_test PASSED in 0.8s
//tensorflow/c/kernels:bitcast_op_test PASSED in 1.4s
//tensorflow/c/kernels:summary_op_benchmark_test PASSED in 1.8s
//tensorflow/c/kernels:summary_op_test PASSED in 1.5s
//tensorflow/c/kernels:tensor_shape_utils_test PASSED in 0.6s
//tensorflow/cc:cc_op_gen_test PASSED in 0.6s
//tensorflow/cc:client_client_session_test PASSED in 4.6s
//tensorflow/cc:coordinator_test PASSED in 9.2s
//tensorflow/cc:framework_cc_ops_test PASSED in 6.3s
//tensorflow/cc:framework_gradient_checker_test PASSED in 6.5s
//tensorflow/cc:framework_gradients_test PASSED in 13.5s
//tensorflow/cc:framework_scope_test PASSED in 1.7s
//tensorflow/cc:framework_while_gradients_test PASSED in 5.4s
//tensorflow/cc:gradients_array_grad_test PASSED in 13.9s
//tensorflow/cc:gradients_data_flow_grad_test PASSED in 6.3s
//tensorflow/cc:gradients_functional_grad_test PASSED in 7.6s
//tensorflow/cc:gradients_image_grad_test PASSED in 18.0s
//tensorflow/cc:gradients_linalg_grad_test PASSED in 8.7s
//tensorflow/cc:gradients_manip_grad_test PASSED in 6.7s
//tensorflow/cc:gradients_math_grad_test PASSED in 15.7s
//tensorflow/cc:gradients_nn_grad_test PASSED in 15.1s
//tensorflow/cc:gradients_resource_variable_grad_test PASSED in 7.2s
//tensorflow/cc:ops_const_op_test PASSED in 1.1s
//tensorflow/cc:ops_while_loop_test PASSED in 10.0s
//tensorflow/cc:queue_runner_test PASSED in 20.4s
//tensorflow/cc/experimental/base/tests:tensor_test PASSED in 0.5s
//tensorflow/cc/experimental/base/tests:tensorhandle_test PASSED in 80.1s
//tensorflow/cc/experimental/libexport:load_test PASSED in 0.6s
//tensorflow/cc/experimental/libexport:save_test PASSED in 0.8s
//tensorflow/cc/experimental/libtf:libtf_module_test PASSED in 63.9s
//tensorflow/cc/experimental/libtf:libtf_object_test PASSED in 0.9s
//tensorflow/cc/experimental/libtf:libtf_perf_test PASSED in 0.7s
//tensorflow/cc/experimental/libtf:libtf_runtime_test PASSED in 62.2s
//tensorflow/cc/experimental/libtf:libtf_transform_test PASSED in 66.6s
//tensorflow/cc/experimental/libtf:libtf_value_test PASSED in 0.7s
//tensorflow/cc/experimental/libtf:libtf_visit_test PASSED in 0.9s
//tensorflow/cc/experimental/libtf/impl:iostream_test PASSED in 0.5s
//tensorflow/cc/experimental/libtf/impl:none_test PASSED in 0.6s
//tensorflow/cc/experimental/libtf/impl:scalars_test PASSED in 0.7s
//tensorflow/cc/experimental/libtf/impl:string_test PASSED in 0.6s
//tensorflow/cc/experimental/libtf/impl:tensor_spec_test PASSED in 0.7s
//tensorflow/cc/saved_model:bundle_v2_test PASSED in 0.7s
//tensorflow/cc/saved_model:fingerprinting_chunked_test PASSED in 0.7s
//tensorflow/cc/saved_model:fingerprinting_test PASSED in 1.9s
//tensorflow/cc/saved_model:fingerprinting_utils_test PASSED in 2.7s
//tensorflow/cc/saved_model:metrics_test PASSED in 0.7s
//tensorflow/cc/saved_model:reader_test PASSED in 0.7s
//tensorflow/cc/saved_model:saved_model_bundle_lite_test PASSED in 14.4s
//tensorflow/cc/saved_model:saved_model_bundle_test PASSED in 20.3s
//tensorflow/cc/saved_model:util_test PASSED in 0.6s
//tensorflow/cc/saved_model/experimental/tests:saved_model_api_test PASSED in 56.4s
//tensorflow/cc/tools:freeze_saved_model_test PASSED in 6.8s
//tensorflow/compiler/aot:codegen_test PASSED in 59.2s
//tensorflow/compiler/jit:compilability_check_util_test PASSED in 30.7s
//tensorflow/compiler/jit:deadness_analysis_test PASSED in 24.7s
//tensorflow/compiler/jit:device_compilation_cache_test PASSED in 7.8s
//tensorflow/compiler/jit:device_compilation_cluster_signature_test PASSED in 14.0s
//tensorflow/compiler/jit:device_compilation_profiler_test PASSED in 52.8s
//tensorflow/compiler/jit:device_compiler_client_test PASSED in 13.0s
//tensorflow/compiler/jit:device_compiler_disable_test PASSED in 39.6s
//tensorflow/compiler/jit:device_executable_persistor_test PASSED in 44.0s
//tensorflow/compiler/jit:device_util_test PASSED in 8.5s
//tensorflow/compiler/jit:encapsulate_util_test PASSED in 1.7s
//tensorflow/compiler/jit:node_matchers_test PASSED in 2.5s
//tensorflow/compiler/jit:resource_operation_safety_analysis_test PASSED in 17.2s
//tensorflow/compiler/jit:shape_inference_test PASSED in 1.9s
//tensorflow/compiler/jit:xla_activity_listener_test PASSED in 44.2s
//tensorflow/compiler/jit:xla_cluster_util_test PASSED in 16.1s
//tensorflow/compiler/jit:xla_compile_util_test PASSED in 12.6s
//tensorflow/compiler/jit:xla_kernel_creator_test PASSED in 24.5s
//tensorflow/compiler/jit:xla_launch_util_test PASSED in 39.4s
//tensorflow/compiler/jit/tests:auto_clustering_test PASSED in 50.9s
//tensorflow/compiler/mlir:mlir_graph_optimization_pass_test PASSED in 22.0s
//tensorflow/compiler/mlir:register_common_dialects_test PASSED in 42.2s
//tensorflow/compiler/mlir/lite:lstm_utils_test PASSED in 3.9s
//tensorflow/compiler/mlir/lite:offset_buffer_test PASSED in 0.8s
//tensorflow/compiler/mlir/lite:perception_ops_utils_test PASSED in 2.9s
//tensorflow/compiler/mlir/lite:size_utils_test PASSED in 0.8s
//tensorflow/compiler/mlir/lite:tftext_utils_test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/debug:debug_test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/experimental/remat:rematerializer_test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/experimental/tac:execution_metadata_exporter_test PASSED in 31.1s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:compute-cost.mlir.test PASSED in 3.6s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:device-transform-gpu.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:device-transform-nnapi.mlir.test PASSED in 3.0s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:fold-constants-to-subgraph.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:get-alternative-subgraph.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:get-op-cost.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:pick-subgraphs.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:raise-target-subgraphs.mlir.test PASSED in 3.1s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:tac-filter.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:target-annotation.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/experimental/tac/tests/e2e:device-transform-nnapi.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/experimental/tac/tests/e2e:simple-graph.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/metrics:error_collector_inst_test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/quantization:numerical_utils_test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/quantization/lite:quantize_model_test PASSED in 18.6s
//tensorflow/compiler/mlir/lite/quantization/stablehlo:quantization_test PASSED in 38.0s
//tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:fallback_to_flex_ops_default.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:fallback_to_flex_ops_legacy.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:tf_to_quant.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:tf_to_quant_4bit.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/quantization/tests:import_quant_stats.mlir.test PASSED in 3.2s
//tensorflow/compiler/mlir/lite/sparsity:sparsify_model_test PASSED in 3.2s
//tensorflow/compiler/mlir/lite/stablehlo/tests:call_xla_module_to_stablehlo.mlir.test PASSED in 2.9s
//tensorflow/compiler/mlir/lite/stablehlo/tests:compose-uniform-quantized-type.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/stablehlo/tests:fold_broadcast.mlir.test PASSED in 3.2s
//tensorflow/compiler/mlir/lite/stablehlo/tests:fuse_mhlo_convolution.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-inplaceupdate.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-skip-quantization-ops.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-skip-stateful-partition-calls.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-composite.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-vhlo.mlir.test PASSED in 3.5s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-add.mlir.test PASSED in 3.0s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-broadcast.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-clamp.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-concat.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-constant.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-conv.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-max.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-mul.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-pad.mlir.test PASSED in 3.1s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-reshape.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-rsqrt.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-sub.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo.mlir.test PASSED in 2.5s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize_hlo.mlir.test PASSED in 3.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:odml-to-stablehlo-allow-tf.mlir.test PASSED in 3.3s
//tensorflow/compiler/mlir/lite/stablehlo/tests:odml-to-stablehlo-smuggle-resize.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:optimize.mlir.test PASSED in 3.4s
//tensorflow/compiler/mlir/lite/stablehlo/tests:stablehlo-custom-call-legalize-composite.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-clamp.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-concat.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-conv.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-division.mlir.test PASSED in 3.4s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-logistic.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-multiply.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-resize-bilinear.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-tf-quantize.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tfl_legalize_hlo.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tfl_legalize_hlo_custom_call.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/stablehlo/tests:unfold_splat_constant_pass.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/stablehlo/tests:unfuse_mhlo_batch_norm.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:uniform-quantized-stablehlo-to-tfl.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/tests:analyze-variables.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests:canonicalize.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/tests:const-fold.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/tests:decompose-hybrid-quantization.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/tests:default_quant_params.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/tests:dilated-conv.mlir.test PASSED in 2.5s
//tensorflow/compiler/mlir/lite/tests:fuse-tftext.mlir.test PASSED in 2.5s
//tensorflow/compiler/mlir/lite/tests:get-arithmetic-count.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/tests:guarantee_func_has_one_use.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests:inlining.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests:insert_call_once_op.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/tests:legalize-tensorlist.mlir.test PASSED in 2.8s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-assert.mlir.test PASSED in 3.0s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-hashtables.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-no-runtime-verification.mlir.test PASSED in 3.0s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-variables.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-while.mlir.test PASSED in 3.1s
//tensorflow/compiler/mlir/lite/tests:legalize-tf.mlir.test PASSED in 5.4s
//tensorflow/compiler/mlir/lite/tests:legalize_jax_random.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/tests:lift_tflite_flex_ops.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list-default-to-single-batch.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list-enable-dynamic-update-slice.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests:modify_io_nodes.mlir.test PASSED in 3.1s
//tensorflow/compiler/mlir/lite/tests:ops.mlir.test PASSED in 3.4s
//tensorflow/compiler/mlir/lite/tests:optimize-after-quantization.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests:optimize.mlir.test PASSED in 7.5s
//tensorflow/compiler/mlir/lite/tests:optimize_batch_matmul.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests:optimize_functional_ops.mlir.test PASSED in 2.5s
//tensorflow/compiler/mlir/lite/tests:optimize_no_verify.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests:optimize_op_order.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests:partitioned-topological-sort.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests:pin-ops-with-side-effects.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests:post-quantize-dynamic-range.mlir.test PASSED in 3.5s
//tensorflow/compiler/mlir/lite/tests:post-quantize.mlir.test PASSED in 3.9s
//tensorflow/compiler/mlir/lite/tests:prepare-composite-functions-tf.mlir.test PASSED in 4.0s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-dynamic-range.mlir.test PASSED in 4.4s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-post-training-16bits.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-post-training.mlir.test PASSED in 4.1s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-signed.mlir.test PASSED in 2.9s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize.mlir.test PASSED in 3.2s
//tensorflow/compiler/mlir/lite/tests:prepare-tf-fake-quant-4bit.mlir.test PASSED in 4.2s
//tensorflow/compiler/mlir/lite/tests:prepare-tf-fake-quant.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/tests:prepare-tf-with-allowing-bf16-and-f16-type-legalization.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/tests:prepare-tf.mlir.test PASSED in 3.5s
//tensorflow/compiler/mlir/lite/tests:push-tpose-through-ewise.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/tests:quantize-dynamic-range.mlir.test PASSED in 6.6s
//tensorflow/compiler/mlir/lite/tests:quantize-numeric-verify.mlir.test PASSED in 3.2s
//tensorflow/compiler/mlir/lite/tests:quantize-variables.mlir.test PASSED in 3.1s
//tensorflow/compiler/mlir/lite/tests:quantize.mlir.test PASSED in 5.0s
//tensorflow/compiler/mlir/lite/tests:raise-custom-ops.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests:reduce-type-precision.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/tests:reduce_while_operands.mlir.test PASSED in 3.0s
//tensorflow/compiler/mlir/lite/tests:shape-inference.mlir.test PASSED in 2.9s
//tensorflow/compiler/mlir/lite/tests:split-merged-operands.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/tests:tfl_while_op_licm.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/tests:tfl_while_outline.mlir.test PASSED in 3.0s
//tensorflow/compiler/mlir/lite/tests:trim-functions-tf.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/tests:unfold-large-splat-constant.mlir.test PASSED in 3.4s
//tensorflow/compiler/mlir/lite/tests/debuginfo:v1_1.0_224_frozen.wrong_attr.line.part.pbtxt.test PASSED in 3.5s
//tensorflow/compiler/mlir/lite/tests/debuginfo:v1_1.0_224_frozen.wrong_attr.stack.part.pbtxt.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/tests/end2end:add.pbtxt.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/tests/end2end:back2back_fake_quant.pbtxt.test PASSED in 2.5s
//tensorflow/compiler/mlir/lite/tests/end2end:control_flow_v1.pbtxt.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/tests/end2end:conv_2d.pbtxt.test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/tests/end2end:conv_2d_nchw.pbtxt.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests/end2end:custom_opdef.pbtxt.test PASSED in 2.9s
//tensorflow/compiler/mlir/lite/tests/end2end:disallow_stateful_partitioned_call.pbtxt.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_per_channel.pbtxt.test PASSED in 2.9s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_per_channel_4bit.pbtxt.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_without_identity.pbtxt.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_without_identity_4bit.pbtxt.test PASSED in 2.8s
//tensorflow/compiler/mlir/lite/tests/end2end:graph-input-node.pbtxt.test PASSED in 2.8s
//tensorflow/compiler/mlir/lite/tests/end2end:graph_with_placeholder_with_default.pbtxt.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests/end2end:if_op.pbtxt.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/tests/end2end:quant_stats.pbtxt.test PASSED in 2.9s
//tensorflow/compiler/mlir/lite/tests/end2end:unroll_batch_matmul.pbtxt.test PASSED in 3.0s
//tensorflow/compiler/mlir/lite/tests/end2end:unroll_batch_matmul_disabled.pbtxt.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:basic_lstm.mlir.test PASSED in 3.1s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:bucketize.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:cast_bf16.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:constants.mlir.test PASSED in 2.8s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:constants_offset.mlir.test PASSED in 2.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:control_edges.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:custom_op.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:custom_op_offset.mlir.test PASSED in 3.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:dynamic_shape.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:empty_input_output_names.json.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:external_constant.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:if_op.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:import_json.json.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:importer_test_min_max.cc.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:importer_test_min_max.cc.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:input_arrays.mlir.test PASSED in 2.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:input_output_names_attr.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:legacy_reshape.json.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:lstm.json.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:lstm.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:many_attribute_op.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:math.mlir.test PASSED in 3.3s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:matmul.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:mix_tflite_vhlo.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:multi_output_op.json.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:optional.mlir.test PASSED in 2.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:optional_input.json.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:output_arrays.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:pruning.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:pruning_function_input_as_output.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:quant_stats.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:quantization.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:reshape.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:signature.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:signature_with_multiple_entry_points.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:simple.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:tf_variant_type.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:unranked_function_output.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:unranked_tensor.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:variable.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:vhlo.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:vhlo_const.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:vhlo_custom_call.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:while_op.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/lite/tests/mlir2exec:tfl_while_op.mlir.test PASSED in 3.2s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:basic_lstm.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:bucketize.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:cast_bf16.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:custom_op_with_tflite_op.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:custom_tensorlist_reserve.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:deduplicate_const.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:depthwise_conv2d.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:depthwise_conv2d_v2.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_builtin.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_custom.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_flex.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_flex_enable_builtin.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:dynamic_shape_constant.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fake_quant.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_exclusively.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_complex128.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_f64.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_tflite_op.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fully_connected.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fully_connected_v2.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:hashtable_resource.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:if_op.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:logical.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:low_bit_packing.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm_asym_attr.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm_quantized.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:math.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:metadata.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:mul_v2.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:mul_v3.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:nn.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:numeric_verify.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:optional.mlir.test PASSED in 1.6s 
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:quantization.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:reshape.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_output_override.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_with_multiple_entry_points.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_with_no_inputs.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple_with_connected_control_nodes.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple_with_unconnected_control_nodes.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:svdf.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:svdf_v2.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:tf_entry_function.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:tfl_while_op.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:transpose_conv_optional.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:type_attr.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:u16_quant.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unidirectional_sequence_lstm.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unidirectional_sequence_rnn.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unranked_tensor.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unsorted_segment_prod.mlir.test PASSED in 1.6s 
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:variable.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:variant_type_on_func.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:variant_type_on_op.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:while_op.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/quantization/common:attrs_and_constraints_test PASSED in 10.3s //tensorflow/compiler/mlir/quantization/common:func_test PASSED in 13.7s //tensorflow/compiler/mlir/quantization/common:lift_as_function_call_test PASSED in 11.6s //tensorflow/compiler/mlir/quantization/common:uniform_quantized_types_test PASSED in 10.3s //tensorflow/compiler/mlir/quantization/common/python:testing_test PASSED in 11.8s //tensorflow/compiler/mlir/quantization/common/quantization_lib:quantization_driver_test PASSED in 13.2s //tensorflow/compiler/mlir/quantization/stablehlo:bfloat16_type_test PASSED in 26.7s //tensorflow/compiler/mlir/quantization/stablehlo:convert_tf_quant_to_mhlo_int_test PASSED in 23.5s //tensorflow/compiler/mlir/quantization/stablehlo:convert_tf_quant_types_test PASSED in 22.2s //tensorflow/compiler/mlir/quantization/stablehlo:math_utils_test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/stablehlo:stablehlo_type_utils_test PASSED in 1.1s //tensorflow/compiler/mlir/quantization/stablehlo:tf_type_utils_test PASSED in 30.9s //tensorflow/compiler/mlir/quantization/stablehlo/cc:config_test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/stablehlo/cc:graph_def_test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/stablehlo/cc:io_test PASSED in 0.8s //tensorflow/compiler/mlir/quantization/stablehlo/cc:permutation_test PASSED in 0.5s //tensorflow/compiler/mlir/quantization/stablehlo/cc:pre_calibration_test PASSED in 22.5s //tensorflow/compiler/mlir/quantization/stablehlo/cc:report_test PASSED in 0.6s 
//tensorflow/compiler/mlir/quantization/stablehlo/cc:saved_model_export_test PASSED in 16.9s //tensorflow/compiler/mlir/quantization/stablehlo/cc:saved_model_import_test PASSED in 23.5s //tensorflow/compiler/mlir/quantization/stablehlo/cc/calibration:representative_dataset_test PASSED in 0.8s //tensorflow/compiler/mlir/quantization/stablehlo/ops:stablehlo_op_quant_spec_test PASSED in 16.0s //tensorflow/compiler/mlir/quantization/stablehlo/tests:fill_quantization_options_test PASSED in 5.8s //tensorflow/compiler/mlir/quantization/tensorflow/calibrator:calibration_algorithm_test PASSED in 43.6s //tensorflow/compiler/mlir/quantization/tensorflow/calibrator:calibration_statistics_collector_test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/tensorflow/calibrator:calibrator_singleton_test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/calibrator:custom_aggregator_op_test PASSED in 41.2s //tensorflow/compiler/mlir/quantization/tensorflow/cc:const_op_size_test PASSED in 1.1s //tensorflow/compiler/mlir/quantization/tensorflow/cc:constant_fold_test PASSED in 16.1s //tensorflow/compiler/mlir/quantization/tensorflow/cc:convert_asset_args_test PASSED in 6.6s //tensorflow/compiler/mlir/quantization/tensorflow/cc:save_variables_test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/tensorflow/debugging:mlir_dump_test PASSED in 1.5s //tensorflow/compiler/mlir/quantization/tensorflow/ops:tf_op_quant_spec_test PASSED in 1.1s //tensorflow/compiler/mlir/quantization/tensorflow/ops:tf_quantize_op_test PASSED in 2.2s //tensorflow/compiler/mlir/quantization/tensorflow/python:concurrency_test PASSED in 89.6s //tensorflow/compiler/mlir/quantization/tensorflow/python:py_function_lib_py_test PASSED in 36.9s //tensorflow/compiler/mlir/quantization/tensorflow/python:pywrap_quantize_model_test PASSED in 41.8s //tensorflow/compiler/mlir/quantization/tensorflow/python:representative_dataset_test PASSED in 19.6s 
//tensorflow/compiler/mlir/quantization/tensorflow/tests:add_dump_tensor_op.mlir.test PASSED in 3.3s //tensorflow/compiler/mlir/quantization/tensorflow/tests:add_dump_tensor_op_stablehlo.mlir.test PASSED in 2.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:add_quantization_unit_loc.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/quantization/tensorflow/tests:cast_bf16_ops_to_f32.mlir.test PASSED in 2.5s //tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_custom_aggregation_op_to_quant_stats.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_fake_quant_to_qdq.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_tf_xla_op_to_tf_op.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_tpu_model_to_cpu.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:duplicate_shape_determining_constants.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/quantization/tensorflow/tests:fake_quant_e2e_flow.mlir.test PASSED in 3.3s //tensorflow/compiler/mlir/quantization/tensorflow/tests:fake_quant_e2e_xla.mlir.test PASSED in 2.8s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_custom_aggregation_ops.mlir.test PASSED in 3.5s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_main_function.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions.mlir.test PASSED in 3.0s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions_drq.mlir.test PASSED in 2.4s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions_weight_only.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_restore_op.mlir.test PASSED in 2.4s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_save_op.mlir.test PASSED in 2.5s 
//tensorflow/compiler/mlir/quantization/tensorflow/tests:issue_ids_of_custom_aggregation_ops.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_hashtable_ops_as_args.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_drq.mlir.test PASSED in 2.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_drq_min_elements.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_xla.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_xla_selective_quantization.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:mark_functions_noinline.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:merge_duplicate_resource_ops.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/quantization/tensorflow/tests:merge_initializer_function_ops_to_main.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/quantization/tensorflow/tests:merge_save_function_ops_to_main.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/quantization/tensorflow/tests:optimize.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_lifting.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize.mlir.test PASSED in 2.4s //tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_drq.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_drq_per_channel.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_ptq.mlir.test PASSED in 2.5s 
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_ptq_per_channel.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/quantization/tensorflow/tests:preprocess_op.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/quantization/tensorflow/tests:preprocess_op_weight_only.mlir.test PASSED in 2.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:propagate_quantize_type.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composit_functions_debugging.mlir.test PASSED in 10.4s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions.mlir.test PASSED in 3.4s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_drq.mlir.test PASSED in 2.3s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_weight_only.mlir.test PASSED in 2.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_xla.mlir.test PASSED in 7.2s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_drq.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_weights.mlir.test PASSED in 3.3s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_xla.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/quantization/tensorflow/tests:remove_var_init_by_const.mlir.test PASSED in 2.3s //tensorflow/compiler/mlir/quantization/tensorflow/tests:replace_cast_hacks_with_tf_xla_ops.mlir.test PASSED in 3.4s //tensorflow/compiler/mlir/quantization/tensorflow/tests:replace_cast_hacks_with_tf_xla_ops_large_constants.mlir.test PASSED in 18.3s //tensorflow/compiler/mlir/quantization/tensorflow/tests:unfreeze_constants.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/quantization/tensorflow/utils:tf_to_uniform_attribute_utils_test PASSED in 1.4s 
//tensorflow/compiler/mlir/quantization/tensorflow/utils:tf_to_xla_attribute_utils_test PASSED in 39.5s //tensorflow/compiler/mlir/stablehlo:stablehlo_test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow:bridge_logger_test PASSED in 6.7s //tensorflow/compiler/mlir/tensorflow:call_graph_util_test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow:cluster_util_test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow:convert_tensor_test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow:convert_type_test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow:data_dumper_logger_config_test PASSED in 10.7s //tensorflow/compiler/mlir/tensorflow:device_util_test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow:dump_graph_test PASSED in 1.2s //tensorflow/compiler/mlir/tensorflow:dump_mlir_util_test PASSED in 22.0s //tensorflow/compiler/mlir/tensorflow:error_util_test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow:tf_saved_model_test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow:tpu_rewrite_device_util_test PASSED in 1.2s //tensorflow/compiler/mlir/tensorflow:xla_rewrite_util_test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:add_functions_for_exported_names.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:annotate-parameter-replication.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:batchmatmul_to_einsum.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:breakup-islands.mlir.test PASSED in 2.4s //tensorflow/compiler/mlir/tensorflow/tests:cannonicalize_ops_outside_compilation.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:canonicalize.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:canonicalize_compile_and_replicate_attributes.mlir.test PASSED in 2.4s //tensorflow/compiler/mlir/tensorflow/tests:check_control_dependencies.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:cluster_formation.mlir.test PASSED in 1.9s 
//tensorflow/compiler/mlir/tensorflow/tests:cluster_ops_by_policy.mlir.test PASSED in 2.3s //tensorflow/compiler/mlir/tensorflow/tests:cluster_outlining.mlir.test PASSED in 2.3s //tensorflow/compiler/mlir/tensorflow/tests:cluster_tf_ops_pass.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:colocate_tpu_copy_with_dynamic_shape.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:constant-fold.mlir.test PASSED in 2.8s //tensorflow/compiler/mlir/tensorflow/tests:constant_op_device_assignment.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:convert-tf-control-flow-to-scf.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:convert_control_to_data_outputs.mlir.test PASSED in 3.3s //tensorflow/compiler/mlir/tensorflow/tests:convert_launch_func_to_tf_call.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/tensorflow/tests:convert_session_initializer_to_function.mlir.test PASSED in 2.3s //tensorflow/compiler/mlir/tensorflow/tests:convert_to_legacy_compile_and_replicate_attributes.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:decompose_reduce_dataset.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:decompose_resource_ops.mlir.test PASSED in 3.6s //tensorflow/compiler/mlir/tensorflow/tests:device_assignment.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:device_assignment_by_func_attr.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:device_attribute_to_launch.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:device_canonicalize.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:device_copy.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:drop_while_shape_invariant.mlir.test PASSED in 2.3s //tensorflow/compiler/mlir/tensorflow/tests:einsum.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:embedding_pipelining.mlir.test PASSED in 1.8s 
//tensorflow/compiler/mlir/tensorflow/tests:embedding_program_key.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:embedding_sequencing.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/tensorflow/tests:empty-main.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:end-to-end-tpu-reshard-variables.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:executor_canonicalize.mlir.test PASSED in 2.4s //tensorflow/compiler/mlir/tensorflow/tests:executor_island_coarsening.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/tensorflow/tests:executor_island_materialize_const.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:extract_head_tail_outside_compilation.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:extract_outside_compilation.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:extract_tpu_copy_with_dynamic_shape_op.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/tensorflow/tests:fold-broadcast.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:freeze_variables.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:func-attr-invalid.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:func-attr.mlir.test PASSED in 2.4s //tensorflow/compiler/mlir/tensorflow/tests:functional-control-flow-to-cfg.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:functional-control-flow-to-regions.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/tensorflow/tests:functionalize-if-fail.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/tensorflow/tests:functionalize-if.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:fused_kernel_matcher.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:gpu_fusion.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:graph_pruning.mlir.test PASSED in 1.7s 
//tensorflow/compiler/mlir/tensorflow/tests:graph_pruning_preserve_ops.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:group_by_dialect.mlir.test PASSED in 2.8s //tensorflow/compiler/mlir/tensorflow/tests:guarantee-all-funcs-one-use.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:hoist_broadcast_read.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:hoist_loop_invariant.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:hoist_replicate_invariant_resource_writes.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import_invalid.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import_saved_model.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:inlining.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:isolate-placer.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:launch_outlining.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:launch_to_device_attribute.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/tensorflow/tests:launch_to_device_attribute_legacy.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_gpu_cc_60.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_gpu_cc_70.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_to_nchw.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_to_nhwc.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_move_transposes_begin.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_move_transposes_end.mlir.test PASSED in 2.2s 
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_to_nchw.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_to_nhwc.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg_arg_control_dep.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg_with_control_flow.mlir.test PASSED in 2.4s //tensorflow/compiler/mlir/tensorflow/tests:localize_var_handles.mlir.test PASSED in 2.4s //tensorflow/compiler/mlir/tensorflow/tests:lower_globals_to_ml_program.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:lower_globals_to_ml_program_invalid.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:lower_quantized.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:lower_tf.mlir.test PASSED in 2.8s //tensorflow/compiler/mlir/tensorflow/tests:lower_variable_ops_to_ml_program.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:mark_input_output_aliases.mlir.test PASSED in 2.3s //tensorflow/compiler/mlir/tensorflow/tests:mark_ops_for_outside_compilation.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:materialize_passthrough_op.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:merge_control_flow.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:mlprogram.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:move_tpu_compile_to_front.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/tensorflow/tests:name_anonymous_iterators.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:optimize-arg-operand-constraint.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:optimize.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:order_by_dialect.mlir.test PASSED in 2.1s 
//tensorflow/compiler/mlir/tensorflow/tests:parallel_execute_to_islands.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:parallel_execute_to_islands_legacy.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:prepare_tpu_computation_for_tf_export.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:print.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/tensorflow/tests:promote_resources_to_args.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:promote_resources_to_args_functions.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:promote_var_handles_to_args.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:readonly_references_to_resources.mlir.test PASSED in 2.5s //tensorflow/compiler/mlir/tensorflow/tests:region-control-flow-to-functional.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:remove_unused_arguments.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:remove_unused_while_results.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:replica_id_to_device_ordinal.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:replicate_invariant_op_hoisting.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:replicate_tensor_list_init_ops.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:replicate_to_island.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:replicate_to_island_legacy.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:resource-alias-analysis-test.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:resource-device-inference.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:resource_analyzer.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:resource_inlining.mlir.test PASSED in 2.7s //tensorflow/compiler/mlir/tensorflow/tests:resource_op_lifting.mlir.test PASSED in 2.5s 
//tensorflow/compiler/mlir/tensorflow/tests:rewrite_tpu_embedding_ops.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:roundtrip-tf-executor.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:shape_inference.mlir.test PASSED in 2.8s //tensorflow/compiler/mlir/tensorflow/tests:side-effect-analysis-test.mlir.test PASSED in 3.1s //tensorflow/compiler/mlir/tensorflow/tests:sink_constant.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:split_into_island_per_op.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:stack_ops_decomposition.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:strip_noinline.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:strip_saved_module_metadata.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:strip_tf_attributes.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:tensor_array_ops_decomposition.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:tensor_list_ops_decomposition.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:tf-executor-to-functional.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:tf-functional-to-executor.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:tf-ops.mlir.test PASSED in 5.7s //tensorflow/compiler/mlir/tensorflow/tests:tf-reduce-identity.mlir.test PASSED in 2.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_data_fuse_map_and_batch.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:tf_data_fuse_pmap_and_batch.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:tf_device_index_selector.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:tf_device_ops.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:tf_device_ops_invalid.mlir.test PASSED in 1.4s 
//tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_invalid.mlir.test PASSED in 2.4s //tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_location_roundtrip.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_printer.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_side_effect.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:tf_optimize.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_asset_sinking.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_deduplicate_bound_input_bindings.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_assets.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_global_tensors.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_global_tensors_mutable_tensors.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_initialize_variables_in_session_init.mlir.test PASSED in 2.4s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_initialize_variables_in_session_init_fail.mlir.test PASSED in 2.7s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_lift_variables.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_lift_variables_invalid_session.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_mark_initialized_variables.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_ops.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_ops_invalid.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_optimize_global_tensors.mlir.test PASSED in 2.1s 
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_optimize_global_tensors_interprocedural.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_remove_vars_in_session_initializer.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:tf_side_effect.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/tensorflow/tests:tf_trait_folds.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:tfrt_ops.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu-annotate-dynamic-shape-inputs.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:tpu-cluster-cleanup-attributes.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:tpu-dynamic-layout-pass.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:tpu-merge-variables-with-execute.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:tpu-multiple-while-body-func.mlir.test PASSED in 2.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu-resource-read-for-write.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:tpu-variable-runtime-reformatting.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:tpu_cluster_formation.mlir.test PASSED in 3.1s //tensorflow/compiler/mlir/tensorflow/tests:tpu_colocate_composite_resource_ops.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/tensorflow/tests:tpu_colocate_splits.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:tpu_device_propagation.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:tpu_host_computation_expansion.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/tensorflow/tests:tpu_identity_pruning.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:tpu_parallel_execute_sink_resource_write.mlir.test PASSED in 2.2s //tensorflow/compiler/mlir/tensorflow/tests:tpu_partitioned_op_conversion.mlir.test PASSED in 2.0s 
//tensorflow/compiler/mlir/tensorflow/tests:tpu_reorder_replicate_and_partitioned_inputs.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_resource_partitioning.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:tpu_rewrite.mlir.test PASSED in 2.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_sharding_identification.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:tpu_space_to_depth_pass.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_tail_with_tobool_op.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:tpu_update_embedding_enqueue_op_inputs.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:tpu_validate_inputs.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:transpose-op.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:unroll-batch-matmul.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:update_control_dependencies.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:verify_for_export.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:warn_when_using_deprecated_dumps.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/tensorflow/tests:while_licm.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/tensorflow/tests:xla_broadcast.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:xla_call_module_deserialization.mlir.test PASSED in 2.1s //tensorflow/compiler/mlir/tensorflow/tests:xla_call_module_round_trip.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:xla_call_module_serialization.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:xla_cluster_formation.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/tensorflow/tests:xla_inline_device_ops.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:xla_outline_entry_functions.mlir.test PASSED in 1.4s 
//tensorflow/compiler/mlir/tensorflow/tests:xla_rewrite.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests:xla_rewrite_v2.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests:xla_sharding_util_test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:xla_validate_iputs.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:add.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:argument-sharding-invalid.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:argument-sharding.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:constant-folding-hook.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:constant-folding.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:convert_mhlo_quant_to_int.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph-resource.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph-resource.pbtxt.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph.pbtxt.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:mlir-module-serialized-str-attr.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:replicate-tensor-list-init-ops.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:result-sharding.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:serialized-mlir-module-str-attr-invalid.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:serialized-mlir-module-str-attr.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:shape-inference-after-legalization.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:shape-inference.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:stablehlo_add.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_coarsening:executor_tpuv1_island_coarsening.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_coarsening:while_op.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_inlining:executor_tpuv1_inline_tpu_island.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_inlining:while_op.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:case_op.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:executor_tpuv1_outline_tpu_island.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:while_op.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:add.pbtxt.test PASSED in 2.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-as-fetch.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-control-dep.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-data-type-with-subtype.pbtxt.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-data-type.pbtxt.test PASSED in 2.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-multi-data-type-with-subtype.pbtxt.test PASSED in 2.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-retval-attrs.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:case_op.pbtxt.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:const-values.pbtxt.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:device-arg-retval-attr.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:empty-input-shapes.pbtxt.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:empty-value-attr.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:feed-as-fetch.pbtxt.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:feed-control-dep.pbtxt.test PASSED in 2.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:force_shared_name_for_resource_ops.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:function-func-attr.pbtxt.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:functional-if-ops.pbtxt.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:functional-while-ops.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function-control-ret.pbtxt.test PASSED in 2.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function-retval-of-arg.pbtxt.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function.pbtxt.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-custom-operation.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-default-attr.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-device-retval.pbtxt.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-empty-tensor-content.pbtxt.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-func-attr.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-call.pbtxt.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-control-ret-diff-island.pbtxt.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-control-ret-same-island.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-defs.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-input-shapes.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-name-bug.pbtxt.test PASSED in 2.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-resource-args.pbtxt.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-gradient-def.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-input-func-arg-name-collision.pbtxt.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-library.pbtxt.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-malformed.pbtxt.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-scalar-input.pbtxt.test PASSED in 2.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-uint8-return.pbtxt.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-undefined-output.pbtxt.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-version-info.pbtxt.test PASSED in 2.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-while-loop.pbtxt.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:invalid-output-index.pbtxt.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:legacy-fed-input-without-inputs.pbtxt.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:merge_node_with_function.pbtxt.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:mlir_passthrough_op.pbtxt.test PASSED in 2.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:multi-output-feeds.pbtxt.test PASSED in 2.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:multiple-use-next-iteration.pbtxt.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:node-locations.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:output-shapes-attr.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:output-shapes.pbtxt.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:parse_example.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:parse_example_v2.pbtxt.test PASSED in 2.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:partial-device-name.pbtxt.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:prune_unused_nodes.pbtxt.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:quint8-const.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:shape-attrs.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:stateful-attribute.pbtxt.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:string-attr.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:switch_n.pbtxt.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:target.pbtxt.test PASSED in 2.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:tensor-list.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:tf-data-pipeline.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:unregistered_kernel.pbtxt.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir/batch_use_same_function:saved_model.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graph:convert_tensor.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:aliasing_arg_attr.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:case.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:convert_tensor.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:derived_shape_attr.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:derived_size_attr.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:device-arg-retval-attr.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:export_main_to_flib.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:fetch_feed_names.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:func_attr.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:func_list_attr.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-control-ret.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-order.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-resource-args-handle-info.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-resource-args.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:functional-if-ops.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:functional-while-ops.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:graph-as-function.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:infer_derived_attribute.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:invalid_input.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:legalized_name.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:missing-main.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:noop.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:optional_symbol_ref.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:output-shapes-attr.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:parse_example.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:parse_example_v2.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:preserve-entry-func-names.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:ref-type-attr.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:ref-while-loop.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:shape_list_attr.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:simple.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:simple_tf_dialect_op.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:stringescape.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:switchn.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf-gradient-attr.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf-legacy-call.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_add.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_identity_n.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_tpu_embedding_ops.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:type_attr.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:type_list_attr.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:unique_name.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:unique_output_name.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:while-loop.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/tf_to_hlo_pipeline:sccp-post-shape-inference.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/transforms:verify_no_outside_compilation_markers_pass_test PASSED in 25.5s
//tensorflow/compiler/mlir/tensorflow/transforms/host_runtime:lower_cluster_to_runtime_ops_test PASSED in 18.0s
//tensorflow/compiler/mlir/tensorflow/transforms/host_runtime:tpu_metadata_utils_test PASSED in 18.9s
//tensorflow/compiler/mlir/tensorflow/translate:tf_mlir_translate_registration_test PASSED in 27.4s
//tensorflow/compiler/mlir/tf2xla/api/v1:cluster_tf_test PASSED in 42.9s
//tensorflow/compiler/mlir/tf2xla/api/v1:compile_mlir_util_test PASSED in 10.0s
//tensorflow/compiler/mlir/tf2xla/api/v1:compile_tf_graph_test PASSED in 1.0s
//tensorflow/compiler/mlir/tf2xla/api/v1:tf_dialect_to_executor_test PASSED in 25.4s
//tensorflow/compiler/mlir/tf2xla/api/v2:cluster_tf_test PASSED in 37.8s
//tensorflow/compiler/mlir/tf2xla/api/v2:legalize_tf_test PASSED in 29.1s
//tensorflow/compiler/mlir/tf2xla/api/v2:tf_dialect_to_executor_test PASSED in 23.1s
//tensorflow/compiler/mlir/tf2xla/internal:clustering_bridge_passes_test PASSED in 7.3s
//tensorflow/compiler/mlir/tf2xla/internal:compilation_timer_test PASSED in 0.5s
//tensorflow/compiler/mlir/tf2xla/internal:legalize_tf_mlir_test PASSED in 22.0s
//tensorflow/compiler/mlir/tf2xla/internal:legalize_tf_to_hlo_test PASSED in 33.5s
//tensorflow/compiler/mlir/tf2xla/internal:logging_hooks_test PASSED in 33.2s
//tensorflow/compiler/mlir/tf2xla/internal:mlir_bridge_pass_util_test PASSED in 2.2s
//tensorflow/compiler/mlir/tf2xla/internal:mlir_pass_instrumentation_test PASSED in 7.5s
//tensorflow/compiler/mlir/tf2xla/internal:test_matchers_test PASSED in 8.6s
//tensorflow/compiler/mlir/tf2xla/internal/inference:inference_metrics_pass_test PASSED in 16.5s
//tensorflow/compiler/mlir/tf2xla/internal/passes:input_metrics_lowering_pass_test PASSED in 16.2s
//tensorflow/compiler/mlir/tf2xla/internal/passes:tpu_cluster_formation_test PASSED in 30.9s
//tensorflow/compiler/mlir/tf2xla/internal/passes:verify_clustering_pass_test PASSED in 21.3s
//tensorflow/compiler/mlir/tf2xla/internal/passes:verify_clustering_pass_test.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tf2xla/internal/passes:verify_input_dialect_to_executor_pass_test.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tf2xla/internal/utils:dialect_detection_utils_test PASSED in 0.7s
//tensorflow/compiler/mlir/tf2xla/tests:adjust-layout.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tf2xla/tests:hlo_xla_runtime_pipeline.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-BatchMatMulV2.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-binary-elementwise.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-collective.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-communication.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-include-tf2xla-fallback.mlir.test PASSED in 3.0s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-prefer-tf2xla.mlir.test PASSED in 2.5s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-quant.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-with-tf2xla-hlo-importer.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf.mlir.test PASSED in 20.7s
//tensorflow/compiler/mlir/tf2xla/tests:tfxla_device_specific_transformations_cpu.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tf2xla/tests:tfxla_device_specific_transformations_gpu.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tf2xla/tests:verify-tfxla-legalization-no-chlo.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tf2xla/tests:verify-tfxla-legalization.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tf2xla/transforms:legalization_op_config_test PASSED in 38.2s
//tensorflow/compiler/mlir/tf2xla/transforms:tf2xla_rewriter_test PASSED in 25.2s
//tensorflow/compiler/mlir/tf2xla/transforms:verify_tfxla_legalization_test PASSED in 23.7s
//tensorflow/compiler/mlir/tf2xla/transforms:xla_legalize_targets_test PASSED in 1.4s
//tensorflow/compiler/mlir/tf2xla/transforms:xla_legalize_tf_test PASSED in 8.2s
//tensorflow/compiler/mlir/tfr:graph_decompose_test PASSED in 25.8s
//tensorflow/compiler/mlir/tfr:node_expansion_test PASSED in 19.7s
//tensorflow/compiler/mlir/tfr:op_reg_gen_test PASSED in 112.6s
//tensorflow/compiler/mlir/tfr:tfr_decompose_ctx_test PASSED in 12.8s
//tensorflow/compiler/mlir/tfr:tfr_gen_test PASSED in 93.4s
//tensorflow/compiler/mlir/tfr/examples/customization:test_ops_test PASSED in 187.2s
//tensorflow/compiler/mlir/tfr/examples/mnist:mnist_ops_test PASSED in 136.4s
//tensorflow/compiler/mlir/tfr/examples/pad:pad_ops_test PASSED in 123.9s
//tensorflow/compiler/mlir/tfrt/tests:batch_function_fallback_resource_variable_as_captured_tensor.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/tfrt/tests:batch_function_lowering.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/tfrt/tests:convert_ref_variables.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/tfrt/tests:cross_device_transfer.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tfrt/tests:deduplicate_if_results.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tfrt/tests:fuse_tpu_compile_and_execute_ops.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/tfrt/tests:hoist_invariant_ops.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tfrt/tests:hoist_invariant_ops_mlrt.mlir.test PASSED in 2.8s
//tensorflow/compiler/mlir/tfrt/tests:optimize.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tfrt/tests:remove_device_attribute.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tfrt/tests:runtime_lowering_gpu.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tfrt/tests:runtime_lowering_tpu.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tfrt/tests:sink_in_invariant_ops.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tfrt/tests:xla_launch_fallback.mlir.test PASSED in 4.0s
//tensorflow/compiler/mlir/tfrt/tests:xla_launch_lowering.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tfrt/tests:xla_rewrite.mlir.test PASSED in 2.9s
//tensorflow/compiler/mlir/tfrt/tests/analysis:cost_analysis.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tfrt/tests/analysis:tensor_array_side_effect_analysis.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tfrt/tests/analysis:update_op_cost_in_tfrt_mlir_test PASSED in 1.5s
//tensorflow/compiler/mlir/tfrt/tests/ifrt:lower_to_ifrt_restore_variable.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/tfrt/tests/ifrt:rewrite_cluster_to_ifrt_call.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/tfrt/tests/ifrt:sink_variable_as_named_array.mlir.test PASSED in 3.1s
//tensorflow/compiler/mlir/tfrt/tests/ifrt:tf_identity_propagation.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/tfrt/tests/ifrt:tf_restore_merging.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tfrt/tests/ifrt:tf_restore_pruning.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/tfrt/tests/ifrt:tf_restore_splitting.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/tfrt/tests/ir:fallback_opt.mlir.test PASSED in 2.8s
//tensorflow/compiler/mlir/tfrt/tests/ir:tfrt_fallback_util_test PASSED in 0.9s
//tensorflow/compiler/mlir/tfrt/tests/mlrt:assign_op_key.mlir.test PASSED in 2.8s
//tensorflow/compiler/mlir/tfrt/tests/mlrt:async_while.mlir.test PASSED in 3.9s
//tensorflow/compiler/mlir/tfrt/tests/mlrt:fuse_mlrt_ops.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tfrt/tests/mlrt:inline.mlir.test PASSED in 2.8s
//tensorflow/compiler/mlir/tfrt/tests/mlrt:parallelization.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/tfrt/tests/mlrt:tf_to_mlrt.mlir.test PASSED in 2.5s
//tensorflow/compiler/mlir/tfrt/tests/mlrt:tpu_conversions.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tfrt/tests/mlrt:while_to_map_fn.mlir.test PASSED in 3.6s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:attributes.mlir.test PASSED in 2.8s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:basic.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:batch_function_deduplicate.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:batch_function_deduplicate_failed.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:const_tensor.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:control_flow.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:decompose_resource_op.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:derived_attrs.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:device_conversion.mlir.test PASSED in 3.5s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:errors.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:fallback.mlir.test PASSED in 3.2s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:fallback_canonicalization.mlir.test PASSED in 3.1s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:fallback_inline.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:func_attributes.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:func_attributes_multiple_callers.mlir.test PASSED in 3.4s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:func_use_fallback_tensor.mlir.test PASSED in 2.9s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:insert_fallback_tensor_copy.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:merge_tf_if_ops.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:optimize_tf_control_flow_side_effect.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:remove_tf_if_const_args.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:reorder_assert.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:side_effects.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:tf_to_corert_pipeline.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:tf_to_corert_pipeline_refvar.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:whileop.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tfrt/translate/mlrt:mlir_to_bytecode_test PASSED in 0.8s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:buffer_deallocation.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:buffer_reuse.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:bufferize.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:copy_cleanup.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:embed_tf_framework.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:func_to_jit_invocations.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:invalid.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:isinf.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:ops.mlir.test PASSED in 3.8s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:parallel_loops_to_sequential.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:rewrite_tf_framework_assert.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_abi_knowledge.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_framework_legalize_to_llvm.mlir.test PASSED in 2.9s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_kernel_gpu_launch_to_llvm.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tosa/tests:convert-tfl-uint8.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tosa/tests:convert_metadata.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tosa/tests:fuse-bias-tf.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tosa/tests:lower-complex-types.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tosa/tests:multi_add.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tosa/tests:retain_call_once_funcs.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tosa/tests:strip-quant-types.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tosa/tests:strip_metadata.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tosa/tests:tf-tfl-to-tosa-pipeline.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tosa/tests:tf-to-tosa-pipeline.mlir.test PASSED in 2.9s
//tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-dequantize_softmax.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-pipeline-filtered.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-pipeline.mlir.test PASSED in 11.7s
//tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-stateful.mlir.test PASSED in 2.8s
//tensorflow/compiler/mlir/tosa/tests:verify_fully_converted.mlir.test PASSED in 1.8s
//tensorflow/compiler/tests:adadelta_test_cpu PASSED in 49.3s
//tensorflow/compiler/tests:adagrad_da_test_cpu PASSED in 49.5s
//tensorflow/compiler/tests:adagrad_test_cpu PASSED in 48.2s
//tensorflow/compiler/tests:adam_test_cpu PASSED in 50.1s
//tensorflow/compiler/tests:add_n_test_cpu PASSED in 28.5s
//tensorflow/compiler/tests:argminmax_test_cpu PASSED in 54.8s
//tensorflow/compiler/tests:argminmax_test_cpu_mlir_bridge_test PASSED in 52.7s
//tensorflow/compiler/tests:async_comp_test_cpu PASSED in 28.6s
//tensorflow/compiler/tests:bincount_op_test_cpu PASSED in 33.9s
//tensorflow/compiler/tests:bucketize_op_test_cpu PASSED in 30.0s
//tensorflow/compiler/tests:bucketize_op_test_cpu_mlir_bridge_test PASSED in 40.7s
//tensorflow/compiler/tests:case_test_cpu PASSED in 35.0s
//tensorflow/compiler/tests:cast_ops_test_cpu PASSED in 36.3s
//tensorflow/compiler/tests:cast_ops_test_cpu_mlir_bridge_test PASSED in 43.2s
//tensorflow/compiler/tests:categorical_op_test_cpu PASSED in 44.4s
//tensorflow/compiler/tests:categorical_op_test_cpu_mlir_bridge_test PASSED in 34.6s
//tensorflow/compiler/tests:cholesky_op_test_cpu PASSED in 59.0s
//tensorflow/compiler/tests:cholesky_op_test_cpu_mlir_bridge_test PASSED in 77.9s
//tensorflow/compiler/tests:clustering_test_cpu PASSED in 40.7s
//tensorflow/compiler/tests:clustering_test_cpu_mlir_bridge_test PASSED in 35.6s
//tensorflow/compiler/tests:concat_ops_test_cpu PASSED in 34.1s
//tensorflow/compiler/tests:concat_ops_test_cpu_mlir_bridge_test PASSED in 34.1s
//tensorflow/compiler/tests:cond_test_cpu PASSED in 49.1s
//tensorflow/compiler/tests:const_arg_test_cpu PASSED in 26.4s
//tensorflow/compiler/tests:const_test_cpu PASSED in 33.3s
//tensorflow/compiler/tests:data_format_ops_test_cpu PASSED in 64.6s
//tensorflow/compiler/tests:data_format_ops_test_cpu_mlir_bridge_test PASSED in 53.1s
//tensorflow/compiler/tests:dense_layer_test_cpu PASSED in 42.4s
//tensorflow/compiler/tests:dynamic_slice_ops_test_cpu PASSED in 98.1s
//tensorflow/compiler/tests:dynamic_slice_ops_test_cpu_mlir_bridge_test PASSED in 41.5s
//tensorflow/compiler/tests:dynamic_stitch_test_cpu PASSED in 42.6s
//tensorflow/compiler/tests:dynamic_stitch_test_cpu_mlir_bridge_test PASSED in 56.3s
//tensorflow/compiler/tests:eager_test_cpu PASSED in 112.7s
//tensorflow/compiler/tests:einsum_op_test_cpu PASSED in 43.9s
//tensorflow/compiler/tests:einsum_op_test_cpu_mlir_bridge_test PASSED in 78.6s
//tensorflow/compiler/tests:ensure_shape_op_test_cpu PASSED in 37.7s
//tensorflow/compiler/tests:extract_image_patches_op_test_cpu PASSED in 33.0s
//tensorflow/compiler/tests:extract_image_patches_op_test_cpu_mlir_bridge_test PASSED in 61.8s
//tensorflow/compiler/tests:fake_quant_ops_test_cpu PASSED in 53.5s
//tensorflow/compiler/tests:fake_quant_ops_test_cpu_mlir_bridge_test PASSED in 83.2s
//tensorflow/compiler/tests:fifo_queue_test_cpu PASSED in 44.1s
//tensorflow/compiler/tests:fifo_queue_test_cpu_mlir_bridge_test PASSED in 43.7s
//tensorflow/compiler/tests:ftrl_ops_test_cpu PASSED in 55.6s
//tensorflow/compiler/tests:ftrl_ops_test_cpu_mlir_bridge_test PASSED in 38.6s
//tensorflow/compiler/tests:function_test_cpu PASSED in 88.0s
//tensorflow/compiler/tests:function_test_cpu_mlir_bridge_test PASSED in 64.1s
//tensorflow/compiler/tests:gather_nd_op_test_cpu PASSED in 72.4s
//tensorflow/compiler/tests:gather_nd_op_test_cpu_mlir_bridge_test PASSED in 112.5s
//tensorflow/compiler/tests:gather_test_cpu PASSED in 167.3s
//tensorflow/compiler/tests:gather_test_cpu_mlir_bridge_test PASSED in 220.5s
//tensorflow/compiler/tests:image_ops_jit_compile_test_cpu PASSED in 56.5s
//tensorflow/compiler/tests:jit_test_cpu PASSED in 142.9s
//tensorflow/compiler/tests:listdiff_op_test_cpu PASSED in 58.4s
//tensorflow/compiler/tests:listdiff_op_test_cpu_mlir_bridge_test PASSED in 58.1s
//tensorflow/compiler/tests:lrn_ops_test_cpu PASSED in 39.4s
//tensorflow/compiler/tests:lrn_ops_test_cpu_mlir_bridge_test PASSED in 39.3s
//tensorflow/compiler/tests:lstm_test_cpu PASSED in 60.2s
//tensorflow/compiler/tests:manip_ops_test_cpu PASSED in 101.5s
//tensorflow/compiler/tests:manip_ops_test_cpu_mlir_bridge_test PASSED in 100.7s
//tensorflow/compiler/tests:matrix_inverse_op_test_cpu PASSED in 83.8s
//tensorflow/compiler/tests:matrix_inverse_op_test_cpu_mlir_bridge_test PASSED in 93.6s
//tensorflow/compiler/tests:matrix_solve_op_test_cpu PASSED in 47.8s
//tensorflow/compiler/tests:matrix_solve_op_test_cpu_mlir_bridge_test PASSED in 71.8s
//tensorflow/compiler/tests:momentum_test_cpu PASSED in 49.3s
//tensorflow/compiler/tests:nary_ops_test_cpu PASSED in 53.9s
//tensorflow/compiler/tests:nary_ops_test_cpu_mlir_bridge_test PASSED in 57.9s
//tensorflow/compiler/tests:nullary_ops_test_cpu PASSED in 70.1s
//tensorflow/compiler/tests:nullary_ops_test_cpu_mlir_bridge_test PASSED in 39.8s
//tensorflow/compiler/tests:placeholder_test_cpu PASSED in 41.2s
//tensorflow/compiler/tests:placeholder_test_cpu_mlir_bridge_test PASSED in 51.5s
//tensorflow/compiler/tests:proximal_adagrad_test_cpu PASSED in 49.7s
//tensorflow/compiler/tests:proximal_gradient_descent_test_cpu PASSED in 80.5s
//tensorflow/compiler/tests:quantized_ops_test_cpu PASSED in 58.8s
//tensorflow/compiler/tests:reduce_window_test_cpu PASSED in 72.7s
//tensorflow/compiler/tests:reduce_window_test_cpu_mlir_bridge_test PASSED in 35.5s
//tensorflow/compiler/tests:repeat_op_test_cpu PASSED in 91.0s
//tensorflow/compiler/tests:repeat_op_test_cpu_mlir_bridge_test PASSED in 74.5s
//tensorflow/compiler/tests:reshape_op_test_cpu PASSED in 43.8s
//tensorflow/compiler/tests:reshape_op_test_cpu_mlir_bridge_test PASSED in 35.6s
//tensorflow/compiler/tests:reverse_ops_test_cpu PASSED in 83.5s
//tensorflow/compiler/tests:reverse_ops_test_cpu_mlir_bridge_test PASSED in 73.7s
//tensorflow/compiler/tests:reverse_sequence_op_test_cpu PASSED in 51.2s
//tensorflow/compiler/tests:reverse_sequence_op_test_cpu_mlir_bridge_test PASSED in 53.8s
//tensorflow/compiler/tests:rmsprop_test_cpu PASSED in 75.7s
//tensorflow/compiler/tests:scatter_nd_op_test_cpu PASSED in 68.7s
//tensorflow/compiler/tests:scatter_nd_op_test_cpu_mlir_bridge_test PASSED in 102.0s //tensorflow/compiler/tests:searchsorted_op_test_cpu PASSED in 51.3s //tensorflow/compiler/tests:searchsorted_op_test_cpu_mlir_bridge_test PASSED in 59.1s //tensorflow/compiler/tests:segment_reduction_ops_test_cpu PASSED in 111.4s //tensorflow/compiler/tests:segment_reduction_ops_test_cpu_mlir_bridge_test PASSED in 100.9s //tensorflow/compiler/tests:self_adjoint_eig_op_test_cpu PASSED in 64.1s //tensorflow/compiler/tests:self_adjoint_eig_op_test_cpu_mlir_bridge_test PASSED in 46.1s //tensorflow/compiler/tests:slice_ops_test_cpu PASSED in 91.8s //tensorflow/compiler/tests:slice_ops_test_cpu_mlir_bridge_test PASSED in 119.4s //tensorflow/compiler/tests:sparse_to_dense_op_test_cpu PASSED in 58.6s //tensorflow/compiler/tests:sparse_to_dense_op_test_cpu_mlir_bridge_test PASSED in 64.9s //tensorflow/compiler/tests:stack_ops_test_cpu PASSED in 54.3s //tensorflow/compiler/tests:tensor_float_32_test_cpu PASSED in 59.5s //tensorflow/compiler/tests:tensor_float_32_test_cpu_mlir_bridge_test PASSED in 61.8s //tensorflow/compiler/tests:tensor_list_ops_test_cpu PASSED in 42.5s //tensorflow/compiler/tests:tridiagonal_matmul_ops_test_cpu PASSED in 82.7s //tensorflow/compiler/tests:tridiagonal_matmul_ops_test_cpu_mlir_bridge_test PASSED in 134.9s //tensorflow/compiler/tests:tridiagonal_solve_ops_test_cpu PASSED in 61.2s //tensorflow/compiler/tests:tridiagonal_solve_ops_test_cpu_mlir_bridge_test PASSED in 142.2s //tensorflow/compiler/tests:unique_ops_test_cpu PASSED in 47.2s //tensorflow/compiler/tests:variable_ops_test_cpu PASSED in 131.7s //tensorflow/compiler/tests:variable_ops_test_cpu_mlir_bridge_test PASSED in 93.1s //tensorflow/compiler/tests:where_op_test_cpu PASSED in 42.2s //tensorflow/compiler/tests:while_test_cpu PASSED in 86.2s //tensorflow/compiler/tests:xla_call_module_no_platform_check_test_cpu PASSED in 66.1s 
//tensorflow/compiler/tests:xla_call_module_no_shape_assertions_check_test_cpu PASSED in 118.0s //tensorflow/compiler/tests:xla_call_module_test_cpu PASSED in 43.6s //tensorflow/compiler/tests:xla_custom_call_ops_test_cpu PASSED in 33.8s //tensorflow/compiler/tests:xla_device_gpu_test_cpu PASSED in 25.1s //tensorflow/compiler/tests:xla_device_test_cpu PASSED in 90.5s //tensorflow/compiler/tests:xla_device_test_cpu_mlir_bridge_test PASSED in 109.6s //tensorflow/compiler/tests:xla_dump_to_test_cpu PASSED in 45.9s //tensorflow/compiler/tests:xla_dump_to_test_cpu_mlir_bridge_test PASSED in 76.0s //tensorflow/compiler/tests:xla_ops_test_cpu PASSED in 137.0s //tensorflow/compiler/tests:xla_ops_test_cpu_mlir_bridge_test PASSED in 180.4s //tensorflow/compiler/tests:xla_test_test PASSED in 19.6s //tensorflow/compiler/tf2xla:const_analysis_test PASSED in 12.4s //tensorflow/compiler/tf2xla:cpu_function_runtime_test PASSED in 0.6s //tensorflow/compiler/tf2xla:functionalize_cond_test PASSED in 2.4s //tensorflow/compiler/tf2xla:functionalize_control_flow_test PASSED in 4.1s //tensorflow/compiler/tf2xla:fused_batchnorm_reserve_space_test_cpu PASSED in 32.9s //tensorflow/compiler/tf2xla:graph_compiler_test PASSED in 7.8s //tensorflow/compiler/tf2xla:literal_util_test PASSED in 1.6s //tensorflow/compiler/tf2xla:resource_operation_table_test PASSED in 6.8s //tensorflow/compiler/tf2xla:resource_util_test_cpu PASSED in 3.7s //tensorflow/compiler/tf2xla:sharding_util_test PASSED in 2.6s //tensorflow/compiler/tf2xla:tf2xla_opset_test PASSED in 12.1s //tensorflow/compiler/tf2xla:tf2xla_test PASSED in 25.8s //tensorflow/compiler/tf2xla:tf2xla_util_test PASSED in 1.3s //tensorflow/compiler/tf2xla:type_util_test PASSED in 1.2s //tensorflow/compiler/tf2xla:xla_compiler_test PASSED in 27.2s //tensorflow/compiler/tf2xla:xla_jit_compiled_cpu_function_test PASSED in 30.2s //tensorflow/compiler/tf2xla:xla_op_registry_test PASSED in 7.1s 
//tensorflow/compiler/tf2xla/kernels:rng_converter_utils_test PASSED in 4.5s //tensorflow/core:@local_tsl__tsl_lib_core_legacy_lib_core_all_tests PASSED in 2.2s //tensorflow/core:__tensorflow_core_lib_core_legacy_lib_core_all_tests PASSED in 33.6s //tensorflow/core:__tensorflow_core_lib_gtl_legacy_lib_gtl_tests PASSED in 0.5s //tensorflow/core:__tensorflow_core_lib_monitoring_cell_reader_test PASSED in 60.8s //tensorflow/core:__tensorflow_core_lib_monitoring_collection_registry_test PASSED in 0.6s //tensorflow/core:__tensorflow_core_lib_monitoring_counter_test PASSED in 0.9s //tensorflow/core:__tensorflow_core_lib_monitoring_gauge_test PASSED in 0.8s //tensorflow/core:__tensorflow_core_lib_monitoring_metric_def_test PASSED in 0.9s //tensorflow/core:__tensorflow_core_lib_monitoring_percentile_sampler_test PASSED in 0.7s //tensorflow/core:__tensorflow_core_lib_monitoring_sampler_test PASSED in 0.8s //tensorflow/core:__tensorflow_core_lib_monitoring_test_utils_test PASSED in 1.1s //tensorflow/core:__tensorflow_core_lib_strings_legacy_low_level_library_tests PASSED in 0.8s //tensorflow/core:__tensorflow_core_lib_wav_wav_io_test PASSED in 0.8s //tensorflow/core:__tensorflow_core_util_mkl_util_test_srcs PASSED in 0.8s //tensorflow/core:lib_strings_ordered_code_test PASSED in 2.1s //tensorflow/core:lib_strings_proto_serialization_test PASSED in 0.4s //tensorflow/core/api_def:api_test PASSED in 21.7s //tensorflow/core/api_def:update_api_def_test PASSED in 1.5s //tensorflow/core/common_runtime:all_to_all_test_cpu PASSED in 1.4s //tensorflow/core/common_runtime:arg_ret_placement_test PASSED in 1.4s //tensorflow/core/common_runtime:buf_rendezvous_test PASSED in 2.6s //tensorflow/core/common_runtime:collective_executor_mgr_test PASSED in 4.9s //tensorflow/core/common_runtime:collective_param_resolver_local_test PASSED in 13.0s //tensorflow/core/common_runtime:collective_rma_local_test PASSED in 4.2s //tensorflow/core/common_runtime:colocate_predecessor_trees_pass_test PASSED 
in 2.0s //tensorflow/core/common_runtime:composite_device_test PASSED in 3.8s //tensorflow/core/common_runtime:cost_measurement_registry_test PASSED in 3.8s //tensorflow/core/common_runtime:cost_util_test PASSED in 0.6s //tensorflow/core/common_runtime:device_mgr_test PASSED in 2.3s //tensorflow/core/common_runtime:device_propagation_test PASSED in 1.2s //tensorflow/core/common_runtime:device_resolver_local_test PASSED in 2.3s //tensorflow/core/common_runtime:device_set_test PASSED in 3.7s //tensorflow/core/common_runtime:direct_session_test_cpu PASSED in 8.7s //tensorflow/core/common_runtime:direct_session_with_debug_test PASSED in 6.4s //tensorflow/core/common_runtime:direct_session_with_tracking_alloc_test PASSED in 2.7s //tensorflow/core/common_runtime:dynamic_device_mgr_test PASSED in 2.5s //tensorflow/core/common_runtime:eval_const_tensor_test PASSED in 1.5s //tensorflow/core/common_runtime:executor_test PASSED in 5.1s //tensorflow/core/common_runtime:function_optimization_registration_test PASSED in 3.6s //tensorflow/core/common_runtime:function_optimization_registry_no_pass_test PASSED in 2.5s //tensorflow/core/common_runtime:function_optimization_registry_pass_failure_test PASSED in 2.0s //tensorflow/core/common_runtime:function_optimization_registry_test PASSED in 2.0s //tensorflow/core/common_runtime:function_threadpool_test PASSED in 3.3s //tensorflow/core/common_runtime:graph_constructor_test PASSED in 3.7s //tensorflow/core/common_runtime:graph_runner_test PASSED in 4.4s //tensorflow/core/common_runtime:hierarchical_tree_broadcaster_test_cpu PASSED in 9.3s //tensorflow/core/common_runtime:inline_function_utils_test PASSED in 1.2s //tensorflow/core/common_runtime:input_colocation_exemption_registry_test PASSED in 1.3s //tensorflow/core/common_runtime:int32_fulltype_test PASSED in 1.0s //tensorflow/core/common_runtime:isolate_placer_inspection_required_ops_pass_test PASSED in 2.8s //tensorflow/core/common_runtime:lower_case_op_test PASSED in 6.7s 
//tensorflow/core/common_runtime:lower_function_call_test PASSED in 6.1s //tensorflow/core/common_runtime:lower_functional_ops_test PASSED in 8.5s //tensorflow/core/common_runtime:lower_if_op_test PASSED in 4.4s //tensorflow/core/common_runtime:lower_while_op_test PASSED in 7.4s //tensorflow/core/common_runtime:mkl_cpu_allocator_test PASSED in 0.6s //tensorflow/core/common_runtime:mkl_threadpool_device_test PASSED in 0.7s //tensorflow/core/common_runtime:no_op_cost_measurement_test PASSED in 0.9s //tensorflow/core/common_runtime:null_request_cost_accessor_test PASSED in 0.7s //tensorflow/core/common_runtime:optimization_registry_test PASSED in 3.2s //tensorflow/core/common_runtime:optimize_cross_host_control_deps_test PASSED in 9.0s //tensorflow/core/common_runtime:optimize_function_graph_utils_test PASSED in 2.6s //tensorflow/core/common_runtime:partitioning_utils_test PASSED in 2.6s //tensorflow/core/common_runtime:pending_counts_test PASSED in 1.8s //tensorflow/core/common_runtime:permuter_test_cpu PASSED in 10.6s //tensorflow/core/common_runtime:placer_inspection_required_ops_utils_test PASSED in 2.6s //tensorflow/core/common_runtime:placer_test PASSED in 2.6s //tensorflow/core/common_runtime:process_function_library_runtime_test_cpu PASSED in 2.1s //tensorflow/core/common_runtime:process_util_test PASSED in 0.6s //tensorflow/core/common_runtime:quantize_training_test PASSED in 9.5s //tensorflow/core/common_runtime:rendezvous_util_test PASSED in 0.8s //tensorflow/core/common_runtime:replicate_constants_pass_test PASSED in 2.3s //tensorflow/core/common_runtime:replicate_per_replica_nodes_test PASSED in 1.2s //tensorflow/core/common_runtime:request_cost_accessor_registry_test PASSED in 3.8s //tensorflow/core/common_runtime:request_cost_test PASSED in 0.5s //tensorflow/core/common_runtime:ring_gatherer_test_cpu PASSED in 11.0s //tensorflow/core/common_runtime:ring_reducer_test_cpu PASSED in 10.7s //tensorflow/core/common_runtime:scoped_allocator_mgr_test PASSED in 
7.3s //tensorflow/core/common_runtime:session_test PASSED in 4.3s //tensorflow/core/common_runtime:shape_refiner_test PASSED in 1.9s //tensorflow/core/common_runtime:single_threaded_executor_test PASSED in 2.7s //tensorflow/core/common_runtime:threadpool_device_test PASSED in 1.8s //tensorflow/core/common_runtime:type_inference_test PASSED in 7.5s //tensorflow/core/common_runtime/eager:attr_builder_test PASSED in 66.0s //tensorflow/core/common_runtime/eager:context_test PASSED in 26.4s //tensorflow/core/common_runtime/eager:custom_device_test PASSED in 18.3s //tensorflow/core/common_runtime/eager:eager_executor_test PASSED in 18.8s //tensorflow/core/common_runtime/eager:eager_op_rewrite_registry_test PASSED in 2.1s //tensorflow/core/common_runtime/eager:eager_operation_test PASSED in 11.8s //tensorflow/core/common_runtime/eager:execute_node_test PASSED in 18.2s //tensorflow/core/common_runtime/eager:execute_test PASSED in 41.9s //tensorflow/core/common_runtime/eager:kernel_and_device_test PASSED in 3.2s //tensorflow/core/common_runtime/eager:mkl_eager_op_rewrite_test PASSED in 15.9s //tensorflow/core/common_runtime/eager:placement_test PASSED in 22.0s //tensorflow/core/common_runtime/eager:placement_utils_test PASSED in 21.3s //tensorflow/core/common_runtime/eager:summary_optimizer_test PASSED in 0.9s //tensorflow/core/common_runtime/eager:tensor_handle_data_test PASSED in 14.2s //tensorflow/core/common_runtime/eager:tensor_handle_test PASSED in 26.5s //tensorflow/core/common_runtime/gpu:gpu_device_on_non_gpu_machine_test PASSED in 0.9s //tensorflow/core/common_runtime/gpu:gpu_serving_device_selector_test PASSED in 1.2s //tensorflow/core/common_runtime/next_pluggable_device:c_plugin_coordination_service_agent_test PASSED in 5.6s //tensorflow/core/common_runtime/next_pluggable_device/c:plugin_c_api_test PASSED in 51.8s //tensorflow/core/common_runtime/next_pluggable_device/c:tf_rendezvous_c_api_test PASSED in 1.0s //tensorflow/core/config:flags_py_test PASSED in 
17.0s //tensorflow/core/config:flags_test PASSED in 1.0s //tensorflow/core/data:compression_utils_test PASSED in 4.2s //tensorflow/core/data:dataset_utils_test PASSED in 1.4s //tensorflow/core/data:hash_utils_test PASSED in 1.4s //tensorflow/core/data:metric_utils_test PASSED in 6.4s //tensorflow/core/data:name_utils_test PASSED in 0.6s //tensorflow/core/data:rewrite_utils_test PASSED in 2.3s //tensorflow/core/data:serialization_utils_test PASSED in 2.6s //tensorflow/core/data:snapshot_utils_test PASSED in 1.5s //tensorflow/core/data:split_utils_test PASSED in 1.6s //tensorflow/core/data:standalone_save_restore_test PASSED in 7.4s //tensorflow/core/data:standalone_test PASSED in 8.3s //tensorflow/core/data:tfdataz_metrics_test PASSED in 6.9s //tensorflow/core/data:unbounded_thread_pool_test PASSED in 2.2s //tensorflow/core/data:utils_test PASSED in 0.5s //tensorflow/core/data/service:auto_scaler_test PASSED in 0.9s //tensorflow/core/data/service:byte_size_test PASSED in 0.8s //tensorflow/core/data/service:common_test PASSED in 0.8s //tensorflow/core/data/service:credentials_factory_test PASSED in 1.8s //tensorflow/core/data/service:cross_trainer_cache_test PASSED in 3.5s //tensorflow/core/data/service:data_service_test PASSED in 25.6s //tensorflow/core/data/service:data_transfer_test PASSED in 1.3s //tensorflow/core/data/service:dataset_store_test PASSED in 2.3s //tensorflow/core/data/service:dispatcher_client_test PASSED in 12.0s //tensorflow/core/data/service:dispatcher_state_test PASSED in 1.0s //tensorflow/core/data/service:graph_rewriters_test PASSED in 1.8s //tensorflow/core/data/service:grpc_dispatcher_impl_test PASSED in 5.2s //tensorflow/core/data/service:grpc_util_test PASSED in 1.2s //tensorflow/core/data/service:grpc_worker_impl_test PASSED in 4.3s //tensorflow/core/data/service:journal_test PASSED in 1.5s //tensorflow/core/data/service:split_provider_test PASSED in 7.2s //tensorflow/core/data/service:task_runner_test PASSED in 6.0s 
//tensorflow/core/data/service:test_util_test PASSED in 7.4s //tensorflow/core/data/service:url_test PASSED in 1.0s //tensorflow/core/data/service:utils_test PASSED in 1.1s //tensorflow/core/data/service:validate_utils_test PASSED in 0.7s //tensorflow/core/data/service:worker_client_test PASSED in 5.5s //tensorflow/core/data/service:worker_impl_test PASSED in 3.3s //tensorflow/core/data/service/client:data_service_client_test PASSED in 6.6s //tensorflow/core/data/service/client:utils_test PASSED in 9.1s //tensorflow/core/data/service/client:validate_utils_test PASSED in 4.4s //tensorflow/core/data/service/snapshot:distributed_snapshot_test PASSED in 30.9s //tensorflow/core/data/service/snapshot:file_utils_test PASSED in 0.8s //tensorflow/core/data/service/snapshot:parallel_tfrecord_writer_test PASSED in 31.2s //tensorflow/core/data/service/snapshot:path_utils_test PASSED in 0.8s //tensorflow/core/data/service/snapshot:prefetched_split_provider_test PASSED in 72.5s //tensorflow/core/data/service/snapshot:snapshot_chunk_provider_test PASSED in 1.9s //tensorflow/core/data/service/snapshot:snapshot_manager_test PASSED in 7.1s //tensorflow/core/data/service/snapshot:snapshot_split_provider_test PASSED in 1.5s //tensorflow/core/data/service/snapshot:snapshot_stream_writer_checkpoint_test PASSED in 19.0s //tensorflow/core/data/service/snapshot:snapshot_stream_writer_test PASSED in 10.4s //tensorflow/core/data/service/snapshot:utils_test PASSED in 0.8s //tensorflow/core/debug:debug_graph_utils_test PASSED in 1.0s //tensorflow/core/distributed_runtime:call_options_test PASSED in 0.6s //tensorflow/core/distributed_runtime:cluster_function_library_runtime_test PASSED in 7.6s //tensorflow/core/distributed_runtime:collective_param_resolver_distributed_test PASSED in 1.8s //tensorflow/core/distributed_runtime:collective_rma_distributed_test PASSED in 1.7s //tensorflow/core/distributed_runtime:device_resolver_distributed_test PASSED in 1.7s 
//tensorflow/core/distributed_runtime:message_wrappers_test PASSED in 0.6s //tensorflow/core/distributed_runtime:partial_run_mgr_test PASSED in 1.1s //tensorflow/core/distributed_runtime:recent_request_ids_test PASSED in 0.8s //tensorflow/core/distributed_runtime:request_id_test PASSED in 1.2s //tensorflow/core/distributed_runtime:rpc_collective_executor_mgr_test PASSED in 1.4s //tensorflow/core/distributed_runtime:server_lib_test PASSED in 0.8s //tensorflow/core/distributed_runtime:session_mgr_test PASSED in 2.2s //tensorflow/core/distributed_runtime:tensor_coding_test PASSED in 0.7s //tensorflow/core/distributed_runtime/coordination:coordination_service_barrier_proxy_test PASSED in 3.9s //tensorflow/core/distributed_runtime/eager:eager_service_impl_test PASSED in 30.2s //tensorflow/core/distributed_runtime/eager:remote_mgr_test PASSED in 20.8s //tensorflow/core/distributed_runtime/integration_test:c_api_multi_client_test_cpu PASSED in 51.3s //tensorflow/core/distributed_runtime/integration_test:c_api_recoverable_jobs_test_cpu PASSED in 64.9s //tensorflow/core/distributed_runtime/integration_test:c_api_session_coordination_test_cpu PASSED in 47.6s //tensorflow/core/distributed_runtime/rpc:grpc_tensor_coding_test PASSED in 8.3s //tensorflow/core/distributed_runtime/rpc:grpc_worker_cache_test PASSED in 1.4s //tensorflow/core/distributed_runtime/rpc/eager:grpc_eager_client_test PASSED in 1.4s //tensorflow/core/example:example_parser_configuration_test PASSED in 2.9s //tensorflow/core/example:feature_util_test PASSED in 0.9s //tensorflow/core/framework:allocator_test PASSED in 8.6s //tensorflow/core/framework:attr_value_util_test PASSED in 1.8s //tensorflow/core/framework:batch_util_test PASSED in 3.2s //tensorflow/core/framework:bfloat16_test PASSED in 1.8s //tensorflow/core/framework:common_shape_fns_test PASSED in 2.5s //tensorflow/core/framework:dataset_test PASSED in 1.5s //tensorflow/core/framework:device_base_test PASSED in 2.2s 
//tensorflow/core/framework:disable_jit_test PASSED in 1.5s //tensorflow/core/framework:framework_op_gen_lib_test PASSED in 0.9s //tensorflow/core/framework:framework_op_segment_test PASSED in 2.6s //tensorflow/core/framework:framework_resource_var_test PASSED in 0.8s //tensorflow/core/framework:framework_run_handler_test PASSED in 4.7s //tensorflow/core/framework:framework_run_handler_util_test PASSED in 13.7s //tensorflow/core/framework:full_type_inference_util_test PASSED in 1.6s //tensorflow/core/framework:full_type_util_test PASSED in 3.3s //tensorflow/core/framework:function_test PASSED in 1.7s //tensorflow/core/framework:graph_def_util_test PASSED in 1.8s //tensorflow/core/framework:graph_to_functiondef_test PASSED in 1.2s //tensorflow/core/framework:kernel_def_builder_test PASSED in 2.4s //tensorflow/core/framework:kernel_def_util_test PASSED in 4.9s //tensorflow/core/framework:memory_types_test PASSED in 3.6s //tensorflow/core/framework:model_test PASSED in 1.9s //tensorflow/core/framework:node_def_builder_test PASSED in 2.8s //tensorflow/core/framework:node_def_util_test PASSED in 1.3s //tensorflow/core/framework:node_properties_test PASSED in 4.6s //tensorflow/core/framework:op_compatibility_test PASSED in 5.1s //tensorflow/core/framework:op_def_builder_test PASSED in 2.1s //tensorflow/core/framework:op_def_util_test PASSED in 1.6s //tensorflow/core/framework:op_kernel_test PASSED in 2.0s //tensorflow/core/framework:op_registration_test PASSED in 2.9s //tensorflow/core/framework:partial_tensor_shape_test PASSED in 2.6s //tensorflow/core/framework:rendezvous_test PASSED in 8.3s //tensorflow/core/framework:resource_handle_test PASSED in 0.7s //tensorflow/core/framework:resource_mgr_test PASSED in 2.9s //tensorflow/core/framework:resource_op_kernel_test PASSED in 2.7s //tensorflow/core/framework:shape_inference_test PASSED in 2.6s //tensorflow/core/framework:shape_inference_testutil_test PASSED in 1.5s //tensorflow/core/framework:tensor_matcher_test PASSED 
in 3.6s //tensorflow/core/framework:tensor_shape_test PASSED in 13.9s //tensorflow/core/framework:tensor_slice_test PASSED in 1.3s //tensorflow/core/framework:tensor_test PASSED in 60.4s //tensorflow/core/framework:tensor_testutil_test PASSED in 2.5s //tensorflow/core/framework:tensor_util_test PASSED in 5.7s //tensorflow/core/framework:tracking_allocator_test PASSED in 2.2s //tensorflow/core/framework:types_test PASSED in 2.2s //tensorflow/core/framework:variant_op_registry_test PASSED in 39.1s //tensorflow/core/framework:variant_test PASSED in 1.7s //tensorflow/core/framework/registration:registration_test PASSED in 1.2s //tensorflow/core/function/capture:by_ref_capture_test PASSED in 23.0s //tensorflow/core/function/capture:capture_container_test PASSED in 24.1s //tensorflow/core/function/integration_test:side_inputs_manual_api_test PASSED in 51.4s //tensorflow/core/function/integration_test:side_inputs_test PASSED in 54.1s //tensorflow/core/function/polymorphism:function_cache_test PASSED in 20.1s //tensorflow/core/function/polymorphism:function_type_test PASSED in 25.5s //tensorflow/core/function/polymorphism:type_dispatch_test PASSED in 30.8s //tensorflow/core/function/runtime_client:runtime_client_cc_test PASSED in 70.7s //tensorflow/core/function/trace_type:custom_nest_trace_type_test PASSED in 26.6s //tensorflow/core/function/trace_type:default_types_test PASSED in 23.1s //tensorflow/core/function/trace_type:serialization_test PASSED in 29.0s //tensorflow/core/function/trace_type:trace_type_test PASSED in 30.4s //tensorflow/core/graph:algorithm_test PASSED in 3.4s //tensorflow/core/graph:collective_order_test PASSED in 1.0s //tensorflow/core/graph:control_flow_test PASSED in 1.6s //tensorflow/core/graph:costmodel_test PASSED in 1.4s //tensorflow/core/graph:edgeset_test PASSED in 1.6s //tensorflow/core/graph:graph_debug_info_builder_test PASSED in 1.3s //tensorflow/core/graph:graph_def_builder_test PASSED in 1.2s //tensorflow/core/graph:graph_partition_test 
PASSED in 4.0s //tensorflow/core/graph:graph_test PASSED in 1.4s //tensorflow/core/graph:node_builder_test PASSED in 1.9s //tensorflow/core/graph:optimizer_cse_test PASSED in 4.3s //tensorflow/core/graph:subgraph_test PASSED in 2.9s //tensorflow/core/graph:tensor_id_test PASSED in 3.9s //tensorflow/core/graph:validate_test PASSED in 1.7s //tensorflow/core/graph/regularization:simple_delete_test PASSED in 0.9s //tensorflow/core/graph/regularization:util_test PASSED in 0.7s //tensorflow/core/grappler:graph_topology_view_test PASSED in 0.6s //tensorflow/core/grappler:graph_view_test PASSED in 3.0s //tensorflow/core/grappler:grappler_item_builder_test PASSED in 4.9s //tensorflow/core/grappler:grappler_item_test PASSED in 3.0s //tensorflow/core/grappler:mutable_graph_view_test PASSED in 9.8s //tensorflow/core/grappler:utils_test PASSED in 5.0s //tensorflow/core/grappler/clusters:single_machine_test PASSED in 34.0s //tensorflow/core/grappler/clusters:virtual_cluster_test PASSED in 4.2s //tensorflow/core/grappler/costs:analytical_cost_estimator_test PASSED in 3.6s //tensorflow/core/grappler/costs:cost_estimator_test PASSED in 0.7s //tensorflow/core/grappler/costs:graph_memory_test PASSED in 4.0s //tensorflow/core/grappler/costs:graph_properties_test PASSED in 14.7s //tensorflow/core/grappler/costs:robust_stats_test PASSED in 2.0s //tensorflow/core/grappler/costs:utils_test PASSED in 3.5s //tensorflow/core/grappler/costs:virtual_placer_test PASSED in 1.2s //tensorflow/core/grappler/costs:virtual_scheduler_test PASSED in 4.3s //tensorflow/core/grappler/graph_analyzer:gen_node_test PASSED in 5.1s //tensorflow/core/grappler/graph_analyzer:graph_analyzer_test PASSED in 4.3s //tensorflow/core/grappler/graph_analyzer:hash_tools_test PASSED in 4.7s //tensorflow/core/grappler/graph_analyzer:sig_node_test PASSED in 8.6s //tensorflow/core/grappler/graph_analyzer:subgraph_test PASSED in 6.2s //tensorflow/core/grappler/inputs:utils_test PASSED in 1.0s 
//tensorflow/core/grappler/optimizers:arithmetic_optimizer_test_cpu PASSED in 13.4s //tensorflow/core/grappler/optimizers:auto_mixed_precision_test_cpu PASSED in 7.4s //tensorflow/core/grappler/optimizers:auto_parallel_test_cpu PASSED in 5.7s //tensorflow/core/grappler/optimizers:common_subgraph_elimination_test_cpu PASSED in 4.3s //tensorflow/core/grappler/optimizers:custom_graph_optimizer_registry_test_cpu PASSED in 9.0s //tensorflow/core/grappler/optimizers:debug_stripper_test_cpu PASSED in 5.1s //tensorflow/core/grappler/optimizers:dependency_optimizer_test_cpu PASSED in 4.7s //tensorflow/core/grappler/optimizers:evaluation_utils_test PASSED in 1.5s //tensorflow/core/grappler/optimizers:function_api_info_test PASSED in 0.5s //tensorflow/core/grappler/optimizers:function_optimizer_test_cpu PASSED in 11.4s //tensorflow/core/grappler/optimizers:generic_layout_optimizer_test_cpu PASSED in 9.0s //tensorflow/core/grappler/optimizers:generic_layout_optimizer_transposer_factory_test PASSED in 0.5s //tensorflow/core/grappler/optimizers:generic_layout_optimizer_transposer_test_cpu PASSED in 6.1s //tensorflow/core/grappler/optimizers:graph_optimizer_stage_test_cpu PASSED in 5.7s //tensorflow/core/grappler/optimizers:implementation_selector_test PASSED in 10.0s //tensorflow/core/grappler/optimizers:loop_optimizer_test_cpu PASSED in 3.3s //tensorflow/core/grappler/optimizers:memory_optimizer_test_cpu PASSED in 4.8s //tensorflow/core/grappler/optimizers:meta_optimizer_test_cpu PASSED in 15.2s //tensorflow/core/grappler/optimizers:mkl_remapper_test PASSED in 7.7s //tensorflow/core/grappler/optimizers:model_pruner_test_cpu PASSED in 5.1s //tensorflow/core/grappler/optimizers:pin_to_host_optimizer_test_cpu PASSED in 4.9s //tensorflow/core/grappler/optimizers:remapper_test_cpu PASSED in 26.0s //tensorflow/core/grappler/optimizers:scoped_allocator_optimizer_test PASSED in 7.3s //tensorflow/core/grappler/optimizers:shape_optimizer_test_cpu PASSED in 5.9s 
//tensorflow/core/grappler/optimizers:static_schedule_test_cpu PASSED in 4.6s //tensorflow/core/grappler/optimizers:tfg_optimizer_hook_test PASSED in 1.4s //tensorflow/core/grappler/optimizers/data:auto_shard_test PASSED in 1.7s //tensorflow/core/grappler/optimizers/data:autotune_buffer_sizes_test PASSED in 1.4s //tensorflow/core/grappler/optimizers/data:batch_parallelization_test PASSED in 1.3s //tensorflow/core/grappler/optimizers/data:disable_intra_op_parallelism_test PASSED in 1.0s //tensorflow/core/grappler/optimizers/data:disable_prefetch_legacy_autotune_test PASSED in 1.3s //tensorflow/core/grappler/optimizers/data:enable_gradient_descent_test PASSED in 2.3s //tensorflow/core/grappler/optimizers/data:filter_fusion_test PASSED in 1.5s //tensorflow/core/grappler/optimizers/data:filter_parallelization_test PASSED in 1.0s //tensorflow/core/grappler/optimizers/data:function_utils_test PASSED in 1.1s //tensorflow/core/grappler/optimizers/data:fusion_utils_test PASSED in 1.8s //tensorflow/core/grappler/optimizers/data:graph_utils_test PASSED in 1.4s //tensorflow/core/grappler/optimizers/data:inject_io_prefetch_test PASSED in 1.4s //tensorflow/core/grappler/optimizers/data:inject_prefetch_test PASSED in 1.6s //tensorflow/core/grappler/optimizers/data:make_deterministic_test PASSED in 1.3s //tensorflow/core/grappler/optimizers/data:make_sloppy_test PASSED in 1.7s //tensorflow/core/grappler/optimizers/data:map_and_batch_fusion_test PASSED in 1.8s //tensorflow/core/grappler/optimizers/data:map_and_filter_fusion_test PASSED in 1.3s //tensorflow/core/grappler/optimizers/data:map_fusion_test PASSED in 1.9s //tensorflow/core/grappler/optimizers/data:map_parallelization_test PASSED in 2.8s //tensorflow/core/grappler/optimizers/data:noop_elimination_test PASSED in 1.4s //tensorflow/core/grappler/optimizers/data:parallel_batch_test PASSED in 2.0s //tensorflow/core/grappler/optimizers/data:remove_compression_map_test PASSED in 1.3s 
//tensorflow/core/grappler/optimizers/data:replicate_on_split_test PASSED in 2.3s //tensorflow/core/grappler/optimizers/data:seq_interleave_prefetch_test PASSED in 2.5s //tensorflow/core/grappler/optimizers/data:shuffle_and_repeat_fusion_test PASSED in 1.0s //tensorflow/core/grappler/optimizers/data:slack_test PASSED in 3.4s //tensorflow/core/grappler/optimizers/data:split_utils_test PASSED in 3.4s //tensorflow/core/grappler/optimizers/data:use_private_thread_pool_test PASSED in 1.1s //tensorflow/core/grappler/optimizers/inference:batch_op_rewriter_test PASSED in 1.3s //tensorflow/core/grappler/utils:canonicalizer_test PASSED in 4.7s //tensorflow/core/grappler/utils:colocation_test PASSED in 2.3s //tensorflow/core/grappler/utils:frame_test PASSED in 0.6s //tensorflow/core/grappler/utils:functions_test PASSED in 3.9s //tensorflow/core/grappler/utils:graph_view_internal_test PASSED in 1.6s //tensorflow/core/grappler/utils:graph_view_test PASSED in 5.5s //tensorflow/core/grappler/utils:grappler_test_test PASSED in 16.7s //tensorflow/core/grappler/utils:pattern_utils_test PASSED in 2.0s //tensorflow/core/grappler/utils:scc_test PASSED in 3.0s //tensorflow/core/grappler/utils:symbolic_shapes_test PASSED in 0.7s //tensorflow/core/grappler/utils:topological_sort_test PASSED in 1.0s //tensorflow/core/grappler/utils:tpu_test PASSED in 0.7s //tensorflow/core/grappler/utils:transitive_fanin_test PASSED in 2.0s //tensorflow/core/grappler/utils:traversal_test PASSED in 1.1s //tensorflow/core/grappler/verifiers:structure_verifier_test PASSED in 4.6s //tensorflow/core/ir:interfaces_test PASSED in 0.8s //tensorflow/core/ir:ops_test PASSED in 1.2s //tensorflow/core/ir:shape_inference_utils_test PASSED in 1.1s //tensorflow/core/ir:tf_op_registry_test PASSED in 1.6s //tensorflow/core/ir:tf_op_wrapper_test PASSED in 0.9s //tensorflow/core/ir:utility_test PASSED in 0.9s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:arg_as_control_ret.pbtxt.test PASSED in 3.1s 
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:backedge_segment.pbtxt.test PASSED in 3.9s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:empty.pbtxt.test PASSED in 1.9s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:error_during_backedge.pbtxt.test PASSED in 3.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_case_with_attr_inference.pbtxt.test PASSED in 3.2s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_if_with_attr_inference.pbtxt.test PASSED in 3.3s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_iterator_get_next_attr_inference.pbtxt.test PASSED in 3.1s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_underscore_output_shapes.pbtxt.test PASSED in 3.2s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_while_with_attr_inference.pbtxt.test PASSED in 1.9s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infeed_dequeue.pbtxt.test PASSED in 3.0s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infer_arg_handle_type.pbtxt.test PASSED in 3.0s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infer_with_output_shapes.pbtxt.test PASSED in 2.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_arg_name.pbtxt.test PASSED in 2.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_backedge_input_size.pbtxt.test PASSED in 2.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_duplicated_node_name.pbtxt.test PASSED in 4.0s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_edge_index.pbtxt.test PASSED in 2.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_edge_name.pbtxt.test PASSED in 2.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_attr_key.pbtxt.test PASSED in 2.4s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_func_attr_key.pbtxt.test PASSED in 4.7s 
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_func_attr_name.pbtxt.test PASSED in 2.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_op_type.pbtxt.test PASSED in 2.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_func_with_empty_name.pbtxt.test PASSED in 1.9s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_function_import.pbtxt.test PASSED in 3.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_control_result.pbtxt.test PASSED in 2.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_input.pbtxt.test PASSED in 2.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_name.pbtxt.test PASSED in 3.3s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_result.pbtxt.test PASSED in 3.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_function_attr_name.pbtxt.test PASSED in 3.2s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_function_named_edge_index.pbtxt.test PASSED in 3.9s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_handle_data.pbtxt.test PASSED in 1.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_input.pbtxt.test PASSED in 2.2s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_result.pbtxt.test PASSED in 3.2s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_result_value.pbtxt.test PASSED in 2.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_data_result.pbtxt.test PASSED in 2.0s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_data_result_value.pbtxt.test PASSED in 1.9s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_input.pbtxt.test PASSED in 1.8s 
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_two_inputs.pbtxt.test PASSED in 1.7s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_named_edge_index.pbtxt.test PASSED in 2.6s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_op_name.pbtxt.test PASSED in 2.0s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_type_list.pbtxt.test PASSED in 1.5s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:legacy_call.pbtxt.test PASSED in 2.0s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:negative_shape.pbtxt.test PASSED in 2.0s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:negative_zero_constant.pbtxt.test PASSED in 1.6s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:three_nodes_with_attrs.pbtxt.test PASSED in 1.6s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:version.pbtxt.test PASSED in 1.5s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:empty.mlir.test PASSED in 1.4s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:fulltype.mlir.test PASSED in 5.5s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:func_with_no_args_or_results.mlir.test PASSED in 1.7s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:negative_zero_constant.mlir.test PASSED in 2.0s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:nested_legacy_call.mlir.test PASSED in 1.7s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:three_nodes_with_attrs.mlir.test PASSED in 2.5s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:version.mlir.test PASSED in 2.0s
//tensorflow/core/ir/importexport/tests/saved_model:saved_model_roundtrip_test PASSED in 0.8s
//tensorflow/core/ir/tests:attributes.mlir.test PASSED in 2.7s
//tensorflow/core/ir/tests:canonicalize.mlir.test PASSED in 2.0s
//tensorflow/core/ir/tests:compatible_types.mlir.test PASSED in 3.5s
//tensorflow/core/ir/tests:concrete-ops.mlir.test PASSED in 2.1s
//tensorflow/core/ir/tests:generic_concrete_ops.mlir.test PASSED in 2.0s
//tensorflow/core/ir/tests:invalid-concrete-ops.mlir.test PASSED in 2.2s
//tensorflow/core/ir/tests:invalid-preserved-attrs.mlir.test PASSED in 1.8s
//tensorflow/core/ir/tests:invalid.mlir.test PASSED in 1.8s
//tensorflow/core/ir/tests:invalid_types.mlir.test PASSED in 2.5s
//tensorflow/core/ir/tests:ops.mlir.test PASSED in 3.0s
//tensorflow/core/ir/tests:region-invalid-ops.mlir.test PASSED in 2.9s
//tensorflow/core/ir/tests:region-ops-graph.mlir.test PASSED in 2.7s
//tensorflow/core/ir/tests:region-ops.mlir.test PASSED in 3.7s
//tensorflow/core/ir/tests:types.mlir.test PASSED in 2.9s
//tensorflow/core/ir/types:dialect_test PASSED in 0.9s
//tensorflow/core/kernels:as_string_op_test PASSED in 1.7s
//tensorflow/core/kernels:basic_ops_benchmark_test PASSED in 1.8s
//tensorflow/core/kernels:batch_kernels_auto_warmup_test PASSED in 5.6s
//tensorflow/core/kernels:batch_kernels_env_test PASSED in 1.1s
//tensorflow/core/kernels:batch_kernels_test PASSED in 36.3s
//tensorflow/core/kernels:bias_op_test PASSED in 2.0s
//tensorflow/core/kernels:bincount_op_test_cpu PASSED in 1.5s
//tensorflow/core/kernels:broadcast_to_op_test_cpu PASSED in 2.1s
//tensorflow/core/kernels:cast_op_test_cpu PASSED in 11.2s
//tensorflow/core/kernels:checkpoint_callback_manager_test PASSED in 1.3s
//tensorflow/core/kernels:clustering_ops_test PASSED in 2.3s
//tensorflow/core/kernels:composite_tensor_variant_test PASSED in 2.9s
//tensorflow/core/kernels:concat_op_test PASSED in 1.4s
//tensorflow/core/kernels:constant_op_test_cpu PASSED in 1.2s
//tensorflow/core/kernels:control_flow_ops_test PASSED in 16.7s
//tensorflow/core/kernels:conv_grad_filter_ops_benchmark_test_cpu PASSED in 2.0s
//tensorflow/core/kernels:conv_grad_input_ops_benchmark_test_cpu PASSED in 1.4s
//tensorflow/core/kernels:conv_ops_benchmark_test_cpu PASSED in 2.0s
//tensorflow/core/kernels:conv_ops_test_cpu PASSED in 14.8s
//tensorflow/core/kernels:count_ops_test PASSED in 1.4s
//tensorflow/core/kernels:cross_op_test PASSED in 1.7s
//tensorflow/core/kernels:cwise_ops_test_cpu PASSED in 1.8s
//tensorflow/core/kernels:debug_ops_test PASSED in 2.0s
//tensorflow/core/kernels:decode_wav_op_test PASSED in 7.1s
//tensorflow/core/kernels:deep_conv2d_test PASSED in 1.3s
//tensorflow/core/kernels:dequantize_op_test PASSED in 1.9s
//tensorflow/core/kernels:diag_op_test_cpu PASSED in 1.1s
//tensorflow/core/kernels:dynamic_partition_op_test_cpu PASSED in 1.2s
//tensorflow/core/kernels:dynamic_stitch_op_test_cpu PASSED in 1.6s
//tensorflow/core/kernels:eigen_activations_test PASSED in 0.8s
//tensorflow/core/kernels:eigen_attention_test PASSED in 0.7s
//tensorflow/core/kernels:eigen_backward_cuboid_convolutions_test PASSED in 1.1s
//tensorflow/core/kernels:eigen_backward_spatial_convolutions_test PASSED in 0.8s
//tensorflow/core/kernels:eigen_benchmark_cpu_test PASSED in 0.7s
//tensorflow/core/kernels:eigen_mkldnn_contraction_kernel_test PASSED in 0.7s
//tensorflow/core/kernels:eigen_pooling_test PASSED in 0.8s
//tensorflow/core/kernels:encode_wav_op_test PASSED in 4.3s
//tensorflow/core/kernels:fingerprint_op_test PASSED in 2.8s
//tensorflow/core/kernels:fused_batch_norm_ex_op_test_cpu PASSED in 2.2s
//tensorflow/core/kernels:fused_batch_norm_op_test_cpu PASSED in 2.5s
//tensorflow/core/kernels:gather_nd_op_test_cpu PASSED in 1.4s
//tensorflow/core/kernels:gather_op_test_cpu PASSED in 1.4s
//tensorflow/core/kernels:guarantee_const_op_test PASSED in 1.5s
//tensorflow/core/kernels:identity_n_op_test PASSED in 1.6s
//tensorflow/core/kernels:identity_op_test PASSED in 1.3s
//tensorflow/core/kernels:immutable_constant_op_test PASSED in 2.2s
//tensorflow/core/kernels:in_topk_op_test PASSED in 1.3s
//tensorflow/core/kernels:isotonic_regression_op_test PASSED in 1.1s
//tensorflow/core/kernels:logging_ops_test PASSED in 2.3s
//tensorflow/core/kernels:lookup_ops_test PASSED in 1.3s
//tensorflow/core/kernels:loss_test PASSED in 0.6s
//tensorflow/core/kernels:lrn_op_test_cpu PASSED in 1.6s
//tensorflow/core/kernels:merge_v2_checkpoints_op_test PASSED in 2.3s
//tensorflow/core/kernels:mfcc_dct_test PASSED in 0.6s
//tensorflow/core/kernels:mfcc_mel_filterbank_test PASSED in 0.7s
//tensorflow/core/kernels:mfcc_op_test_cpu PASSED in 5.9s
//tensorflow/core/kernels:mfcc_test PASSED in 0.6s
//tensorflow/core/kernels:multinomial_op_test_cpu PASSED in 1.9s
//tensorflow/core/kernels:nn_ops_test_cpu PASSED in 1.8s
//tensorflow/core/kernels:one_hot_op_test PASSED in 2.1s
//tensorflow/core/kernels:ops_testutil_test PASSED in 1.7s
//tensorflow/core/kernels:ops_util_test PASSED in 0.8s
//tensorflow/core/kernels:parameterized_truncated_normal_op_test_cpu PASSED in 1.0s
//tensorflow/core/kernels:parse_tensor_test PASSED in 1.4s
//tensorflow/core/kernels:quantization_utils_test PASSED in 1.4s
//tensorflow/core/kernels:quantize_and_dequantize_op_test_cpu PASSED in 2.2s
//tensorflow/core/kernels:quantize_down_and_shrink_range_op_test PASSED in 2.8s
//tensorflow/core/kernels:quantize_op_test PASSED in 2.5s
//tensorflow/core/kernels:quantized_activation_ops_test PASSED in 1.3s
//tensorflow/core/kernels:quantized_add_op_test PASSED in 2.4s
//tensorflow/core/kernels:quantized_batch_norm_op_test PASSED in 2.1s
//tensorflow/core/kernels:quantized_bias_add_op_test PASSED in 1.5s
//tensorflow/core/kernels:quantized_concat_op_test PASSED in 2.7s
//tensorflow/core/kernels:quantized_conv_ops_test PASSED in 1.5s
//tensorflow/core/kernels:quantized_instance_norm_test PASSED in 4.0s
//tensorflow/core/kernels:quantized_matmul_op_test PASSED in 2.0s
//tensorflow/core/kernels:quantized_mul_op_test PASSED in 4.7s
//tensorflow/core/kernels:quantized_pooling_ops_test PASSED in 3.3s
//tensorflow/core/kernels:quantized_reshape_op_test PASSED in 1.3s
//tensorflow/core/kernels:quantized_resize_bilinear_op_test PASSED in 3.9s
//tensorflow/core/kernels:ragged_fill_empty_rows_op_test PASSED in 1.8s
//tensorflow/core/kernels:ragged_gather_op_test PASSED in 1.7s
//tensorflow/core/kernels:ragged_range_op_test PASSED in 2.1s
//tensorflow/core/kernels:ragged_tensor_from_variant_op_test PASSED in 2.0s
//tensorflow/core/kernels:ragged_tensor_to_sparse_kernel_test PASSED in 1.5s
//tensorflow/core/kernels:ragged_tensor_to_tensor_op_test PASSED in 1.6s
//tensorflow/core/kernels:ragged_tensor_to_variant_op_test PASSED in 2.5s
//tensorflow/core/kernels:random_binomial_op_test_cpu PASSED in 3.2s
//tensorflow/core/kernels:random_index_shuffle_test PASSED in 0.7s
//tensorflow/core/kernels:random_op_test_cpu PASSED in 1.9s
//tensorflow/core/kernels:random_poisson_op_test_cpu PASSED in 1.0s
//tensorflow/core/kernels:range_sampler_test PASSED in 1.1s
//tensorflow/core/kernels:reduction_ops_test_cpu PASSED in 1.2s
//tensorflow/core/kernels:regex_replace_op_test PASSED in 1.9s
//tensorflow/core/kernels:requantization_range_op_test PASSED in 1.5s
//tensorflow/core/kernels:requantize_op_test PASSED in 2.5s
//tensorflow/core/kernels:resource_ops_test PASSED in 1.6s
//tensorflow/core/kernels:restore_op_test PASSED in 3.1s
//tensorflow/core/kernels:restore_v2_op_test PASSED in 2.3s
//tensorflow/core/kernels:reverse_op_test PASSED in 2.7s
//tensorflow/core/kernels:roll_op_test PASSED in 1.7s
//tensorflow/core/kernels:save_op_test PASSED in 1.3s
//tensorflow/core/kernels:save_v2_op_test PASSED in 1.2s
//tensorflow/core/kernels:scan_ops_test_cpu PASSED in 2.0s
//tensorflow/core/kernels:scatter_nd_op_test_cpu PASSED in 1.9s
//tensorflow/core/kernels:scatter_op_test PASSED in 1.6s
//tensorflow/core/kernels:scoped_allocator_ops_test_cpu PASSED in 15.6s
//tensorflow/core/kernels:sdca_ops_test PASSED in 3.2s
//tensorflow/core/kernels:segment_reduction_ops_test PASSED in 1.0s
//tensorflow/core/kernels:sendrecv_ops_test PASSED in 1.1s
//tensorflow/core/kernels:sequence_ops_test PASSED in 2.0s
//tensorflow/core/kernels:shape_ops_test PASSED in 1.3s
//tensorflow/core/kernels:slice_op_test PASSED in 1.2s
//tensorflow/core/kernels:spacetobatch_benchmark_test_cpu PASSED in 1.5s
//tensorflow/core/kernels:sparse_add_op_test PASSED in 1.4s
//tensorflow/core/kernels:sparse_dense_binary_op_shared_test PASSED in 1.5s
//tensorflow/core/kernels:sparse_fill_empty_rows_op_test_cpu PASSED in 1.3s
//tensorflow/core/kernels:sparse_matmul_op_test_cpu PASSED in 1.4s
//tensorflow/core/kernels:sparse_reduce_sum_op_test PASSED in 1.4s
//tensorflow/core/kernels:sparse_tensor_dense_matmul_op_test_cpu PASSED in 0.9s
//tensorflow/core/kernels:sparse_to_dense_op_test_cpu PASSED in 1.3s
//tensorflow/core/kernels:sparse_utils_test PASSED in 1.7s
//tensorflow/core/kernels:sparse_xent_op_test_cpu PASSED in 2.0s
//tensorflow/core/kernels:spectrogram_op_test_cpu PASSED in 6.4s
//tensorflow/core/kernels:spectrogram_test PASSED in 1.1s
//tensorflow/core/kernels:split_op_test_cpu PASSED in 1.8s
//tensorflow/core/kernels:split_v_op_test_cpu PASSED in 1.6s
//tensorflow/core/kernels:strided_slice_op_test PASSED in 1.4s
//tensorflow/core/kernels:string_format_op_test PASSED in 1.7s
//tensorflow/core/kernels:string_ngrams_op_test PASSED in 2.2s
//tensorflow/core/kernels:string_split_op_test PASSED in 1.6s
//tensorflow/core/kernels:substr_op_test PASSED in 1.2s
//tensorflow/core/kernels:summary_audio_op_test PASSED in 1.7s
//tensorflow/core/kernels:summary_image_op_test PASSED in 1.6s
//tensorflow/core/kernels:summary_op_test PASSED in 1.8s
//tensorflow/core/kernels:summary_tensor_op_test PASSED in 1.7s
//tensorflow/core/kernels:tensor_cord_test PASSED in 0.6s
//tensorflow/core/kernels:tensor_flag_utils_test PASSED in 0.8s
//tensorflow/core/kernels:tensor_map_test PASSED in 0.8s
//tensorflow/core/kernels:training_ops_test PASSED in 1.3s
//tensorflow/core/kernels:transpose_util_test PASSED in 1.5s
//tensorflow/core/kernels:unary_ops_composition_test_cpu PASSED in 4.7s
//tensorflow/core/kernels:unique_op_test PASSED in 1.4s
//tensorflow/core/kernels:variable_ops_test PASSED in 4.8s
//tensorflow/core/kernels:while_op_test PASSED in 3.6s
//tensorflow/core/kernels:xent_op_test_cpu PASSED in 1.4s
//tensorflow/core/kernels/batching_util:basic_batch_scheduler_test PASSED in 0.9s
//tensorflow/core/kernels/batching_util:batch_input_task_test PASSED in 1.8s
//tensorflow/core/kernels/batching_util:batch_resource_base_test PASSED in 0.5s
//tensorflow/core/kernels/batching_util:batch_scheduler_test PASSED in 0.7s
//tensorflow/core/kernels/batching_util:bounded_executor_test PASSED in 20.8s
//tensorflow/core/kernels/batching_util:input_split_metadata_test PASSED in 0.4s
//tensorflow/core/kernels/batching_util:periodic_function_test PASSED in 3.8s
//tensorflow/core/kernels/batching_util:serial_device_batch_scheduler_test PASSED in 3.9s
//tensorflow/core/kernels/batching_util:shared_batch_scheduler_test PASSED in 15.8s
//tensorflow/core/kernels/batching_util:threadsafe_status_test PASSED in 0.6s
//tensorflow/core/kernels/data:batch_dataset_op_test PASSED in 2.5s
//tensorflow/core/kernels/data:cache_dataset_ops_test PASSED in 3.0s
//tensorflow/core/kernels/data:concatenate_dataset_op_test PASSED in 1.8s
//tensorflow/core/kernels/data:filter_dataset_op_test PASSED in 2.1s
//tensorflow/core/kernels/data:finalize_dataset_op_test PASSED in 2.6s
//tensorflow/core/kernels/data:fixed_length_record_dataset_op_test PASSED in 2.7s
//tensorflow/core/kernels/data:flat_map_dataset_op_test PASSED in 2.8s
//tensorflow/core/kernels/data:get_options_op_test PASSED in 1.3s
//tensorflow/core/kernels/data:interleave_dataset_op_test PASSED in 3.5s
//tensorflow/core/kernels/data:iterator_ops_test PASSED in 2.1s
//tensorflow/core/kernels/data:map_dataset_op_test PASSED in 2.3s
//tensorflow/core/kernels/data:map_defun_op_test PASSED in 1.5s
//tensorflow/core/kernels/data:optimize_dataset_op_test PASSED in 1.7s
//tensorflow/core/kernels/data:options_dataset_op_test PASSED in 1.9s
//tensorflow/core/kernels/data:padded_batch_dataset_op_test PASSED in 2.7s
//tensorflow/core/kernels/data:parallel_batch_dataset_op_test PASSED in 4.0s
//tensorflow/core/kernels/data:parallel_filter_dataset_op_test PASSED in 4.7s
//tensorflow/core/kernels/data:parallel_interleave_dataset_op_test PASSED in 8.1s
//tensorflow/core/kernels/data:parallel_map_dataset_op_test PASSED in 3.7s
//tensorflow/core/kernels/data:prefetch_autotuner_test PASSED in 1.1s
//tensorflow/core/kernels/data:prefetch_dataset_op_test PASSED in 3.0s
//tensorflow/core/kernels/data:range_dataset_op_test PASSED in 2.5s
//tensorflow/core/kernels/data:reduce_dataset_op_test PASSED in 2.7s
//tensorflow/core/kernels/data:repeat_dataset_op_test PASSED in 3.9s
//tensorflow/core/kernels/data:rewrite_dataset_op_test PASSED in 1.9s
//tensorflow/core/kernels/data:shard_dataset_op_test PASSED in 2.2s
//tensorflow/core/kernels/data:shuffle_dataset_op_test PASSED in 2.8s
//tensorflow/core/kernels/data:skip_dataset_op_test PASSED in 2.4s
//tensorflow/core/kernels/data:sparse_tensor_slice_dataset_op_test PASSED in 2.9s
//tensorflow/core/kernels/data:take_dataset_op_test PASSED in 2.9s
//tensorflow/core/kernels/data:tensor_dataset_op_test PASSED in 2.1s
//tensorflow/core/kernels/data:tensor_slice_dataset_op_test PASSED in 2.0s
//tensorflow/core/kernels/data:text_line_dataset_op_test PASSED in 2.3s
//tensorflow/core/kernels/data:tf_record_dataset_op_test PASSED in 2.3s
//tensorflow/core/kernels/data:window_dataset_op_test PASSED in 3.7s
//tensorflow/core/kernels/data:zip_dataset_op_test PASSED in 3.1s
//tensorflow/core/kernels/data/experimental:assert_next_dataset_op_test PASSED in 2.2s
//tensorflow/core/kernels/data/experimental:assert_prev_dataset_op_test PASSED in 2.4s
//tensorflow/core/kernels/data/experimental:auto_shard_dataset_op_test PASSED in 2.2s
//tensorflow/core/kernels/data/experimental:directed_interleave_dataset_op_test PASSED in 2.6s
//tensorflow/core/kernels/data/experimental:list_dataset_op_test PASSED in 2.2s
//tensorflow/core/kernels/data/experimental:map_and_batch_dataset_op_test PASSED in 3.0s
//tensorflow/core/kernels/data/experimental:parallel_interleave_dataset_op_test PASSED in 4.1s
//tensorflow/core/kernels/data/experimental:random_dataset_op_test PASSED in 2.2s
//tensorflow/core/kernels/data/experimental:sampling_dataset_op_test PASSED in 1.9s
//tensorflow/core/kernels/data/experimental:save_dataset_op_test PASSED in 2.1s
//tensorflow/core/kernels/data/experimental:unique_dataset_op_test PASSED in 2.3s
//tensorflow/core/kernels/image:adjust_contrast_op_benchmark_test_cpu PASSED in 1.5s
//tensorflow/core/kernels/image:adjust_contrast_op_test PASSED in 1.7s
//tensorflow/core/kernels/image:colorspace_op_test PASSED in 1.7s
//tensorflow/core/kernels/image:crop_and_resize_op_benchmark_test_cpu PASSED in 0.8s
//tensorflow/core/kernels/image:crop_and_resize_op_test PASSED in 2.6s
//tensorflow/core/kernels/image:encode_jpeg_op_test PASSED in 1.7s
//tensorflow/core/kernels/image:mirror_pad_op_benchmark_test_cpu PASSED in 1.1s
//tensorflow/core/kernels/image:mirror_pad_op_test PASSED in 2.1s
//tensorflow/core/kernels/image:non_max_suppression_op_benchmark_test PASSED in 1.7s
//tensorflow/core/kernels/image:non_max_suppression_op_test PASSED in 2.8s
//tensorflow/core/kernels/image:resize_area_op_test PASSED in 2.2s
//tensorflow/core/kernels/image:resize_benchmark_test_cpu PASSED in 2.2s
//tensorflow/core/kernels/image:resize_ops_test_cpu PASSED in 9.1s
//tensorflow/core/kernels/image:sampling_kernels_test PASSED in 1.4s
//tensorflow/core/kernels/image:scale_and_translate_op_test PASSED in 3.8s
//tensorflow/core/kernels/linalg:banded_triangular_solve_op_test_cpu PASSED in 0.8s
//tensorflow/core/kernels/linalg:matrix_triangular_solve_op_test_cpu PASSED in 1.5s
//tensorflow/core/kernels/mkl:mkl_conv_ops_test PASSED in 0.7s
//tensorflow/core/kernels/mkl:mkl_dequantize_op_test PASSED in 1.3s
//tensorflow/core/kernels/mkl:mkl_fused_batch_norm_op_test PASSED in 1.1s
//tensorflow/core/kernels/mkl:mkl_fused_ops_test PASSED in 5.1s
//tensorflow/core/kernels/mkl:mkl_matmul_op_benchmark PASSED in 1.3s
//tensorflow/core/kernels/mkl:mkl_qmatmul_op_test PASSED in 1.7s
//tensorflow/core/kernels/mkl:mkl_quantize_op_test PASSED in 1.4s
//tensorflow/core/kernels/mkl:mkl_quantized_concat_op_test PASSED in 1.3s
//tensorflow/core/kernels/mkl:mkl_quantized_conv_ops_perchannel_test PASSED in 0.9s
//tensorflow/core/kernels/mkl:mkl_quantized_conv_ops_test PASSED in 1.2s
//tensorflow/core/kernels/mkl:mkl_quantized_pooling_ops_test PASSED in 1.6s
//tensorflow/core/kernels/mkl:mkl_relu_op_test PASSED in 1.0s
//tensorflow/core/kernels/mkl:mkl_requantize_ops_test PASSED in 1.0s
//tensorflow/core/kernels/mkl:mkl_sparse_matrix_matmul_op_benchmark PASSED in 1.2s
//tensorflow/core/kernels/mkl:mkl_swish_op_test PASSED in 1.2s
//tensorflow/core/kernels/mkl:onednn_nn_ops_benchmark PASSED in 1.1s
//tensorflow/core/kernels/sparse:kernels_test PASSED in 1.5s
//tensorflow/core/kernels/uniform_quant_ops:math_utils_test PASSED in 0.5s
//tensorflow/core/kernels/uniform_quant_ops:tensor_utils_test PASSED in 0.9s
//tensorflow/core/kernels/uniform_quant_ops:uniform_dequantize_op_test PASSED in 2.4s
//tensorflow/core/kernels/uniform_quant_ops:uniform_quantize_op_test PASSED in 1.7s
//tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_add_op_test PASSED in 1.7s
//tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_clip_by_value_op_test PASSED in 1.1s
//tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_convolution_ops_test PASSED in 3.0s
//tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_dot_ops_test PASSED in 2.0s
//tensorflow/core/kernels/uniform_quant_ops:uniform_requantize_op_test PASSED in 1.7s
//tensorflow/core/lib/db:sqlite_test PASSED in 0.7s
//tensorflow/core/lib/gif:lib_gif_io_test PASSED in 5.3s
//tensorflow/core/lib/jpeg:lib_jpeg_jpeg_mem_unittest PASSED in 1.5s
//tensorflow/core/ops:cudnn_rnn_ops_test_cc PASSED in 1.6s
//tensorflow/core/ops:ops_array_grad_test PASSED in 3.6s
//tensorflow/core/ops:ops_math_grad_test PASSED in 8.9s
//tensorflow/core/ops:ops_tests PASSED in 1.5s
//tensorflow/core/ops/compat:backwards_compatibility_test PASSED in 6.1s
//tensorflow/core/platform:enable_tf2_utils_test PASSED in 1.2s
//tensorflow/core/platform:env_test PASSED in 4.4s
//tensorflow/core/platform:fake_python_env_test PASSED in 1.2s
//tensorflow/core/platform:file_system_test PASSED in 4.4s
//tensorflow/core/platform:platform_strings_test PASSED in 0.9s
//tensorflow/core/platform:ram_file_system_test PASSED in 23.8s
//tensorflow/core/platform:resource_loader_test PASSED in 0.8s
//tensorflow/core/platform:vmodule_benchmark_test PASSED in 0.9s
//tensorflow/core/platform:vmodule_test PASSED in 1.1s
//tensorflow/core/profiler/convert:dcn_analysis_test PASSED in 0.8s
//tensorflow/core/profiler/convert:dcn_utils_test PASSED in 0.8s
//tensorflow/core/profiler/convert:hlo_proto_to_graph_view_test PASSED in 0.7s
//tensorflow/core/profiler/convert:hlo_proto_to_memory_visualization_utils_test PASSED in 1.1s
//tensorflow/core/profiler/convert:op_stats_combiner_test PASSED in 1.1s
//tensorflow/core/profiler/convert:op_stats_to_pod_stats_test PASSED in 1.1s
//tensorflow/core/profiler/convert:op_stats_to_pod_viewer_test PASSED in 1.0s
//tensorflow/core/profiler/convert:op_stats_to_tf_stats_test PASSED in 0.8s
//tensorflow/core/profiler/convert:repository_test PASSED in 0.6s
//tensorflow/core/profiler/convert:xplane_to_dcn_collective_stats_test PASSED in 0.9s
//tensorflow/core/profiler/convert:xplane_to_kernel_stats_db_test PASSED in 0.9s
//tensorflow/core/profiler/convert:xplane_to_memory_profile_test PASSED in 0.7s
//tensorflow/core/profiler/convert:xplane_to_op_metrics_db_test PASSED in 0.6s
//tensorflow/core/profiler/convert:xplane_to_op_stats_test PASSED in 1.1s
//tensorflow/core/profiler/convert:xplane_to_step_events_test PASSED in 0.8s
//tensorflow/core/profiler/convert:xplane_to_tf_functions_test PASSED in 0.5s
//tensorflow/core/profiler/convert:xplane_to_tool_names_test PASSED in 1.1s
//tensorflow/core/profiler/convert/trace_viewer:trace_viewer_visibility_test PASSED in 0.8s
//tensorflow/core/profiler/internal:tfprof_show_test PASSED in 1.9s
//tensorflow/core/profiler/internal:tfprof_stats_test PASSED in 1.2s
//tensorflow/core/profiler/internal:tfprof_tensor_test PASSED in 1.4s
//tensorflow/core/profiler/internal:tfprof_timeline_test PASSED in 2.8s
//tensorflow/core/profiler/internal/advisor:tfprof_advisor_test PASSED in 1.4s
//tensorflow/core/profiler/lib:profiler_disabled_test PASSED in 1.0s
//tensorflow/core/profiler/utils:derived_timeline_test PASSED in 1.0s
//tensorflow/core/profiler/utils:kernel_stats_utils_test PASSED in 0.9s
//tensorflow/core/profiler/utils:op_metrics_db_utils_test PASSED in 1.0s
//tensorflow/core/profiler/utils:step_intersection_test PASSED in 1.0s
//tensorflow/core/runtime_fallback/util:type_util_test PASSED in 0.8s
//tensorflow/core/summary:schema_test PASSED in 0.8s
//tensorflow/core/summary:summary_db_writer_test PASSED in 1.2s
//tensorflow/core/summary:summary_file_writer_test PASSED in 1.1s
//tensorflow/core/tfrt/common:pjrt_cpu_client_registration_test PASSED in 12.3s
//tensorflow/core/tfrt/common:pjrt_state_test PASSED in 21.4s
//tensorflow/core/tfrt/common:pjrt_util_test PASSED in 16.8s
//tensorflow/core/tfrt/fallback:cost_recorder_test PASSED in 1.0s
//tensorflow/core/tfrt/fallback:fallback_state_test PASSED in 2.0s
//tensorflow/core/tfrt/graph_executor:config_test PASSED in 0.6s
//tensorflow/core/tfrt/mlrt/attribute:attribute_test PASSED in 1.5s
//tensorflow/core/tfrt/mlrt/bytecode:bytecode_test PASSED in 1.0s
//tensorflow/core/tfrt/mlrt/bytecode:executable_test PASSED in 0.8s
//tensorflow/core/tfrt/mlrt/bytecode:function_test PASSED in 0.7s
//tensorflow/core/tfrt/mlrt/bytecode:kernel_test PASSED in 0.6s
//tensorflow/core/tfrt/mlrt/bytecode:span_test PASSED in 0.5s
//tensorflow/core/tfrt/mlrt/interpreter:context_test PASSED in 0.7s
//tensorflow/core/tfrt/mlrt/interpreter:future_test PASSED in 0.8s
//tensorflow/core/tfrt/mlrt/interpreter:interpreter_test PASSED in 2.3s
//tensorflow/core/tfrt/mlrt/interpreter:register_span_test PASSED in 1.5s
//tensorflow/core/tfrt/mlrt/interpreter:value_test PASSED in 1.0s
//tensorflow/core/tfrt/run_handler_thread_pool:run_handler_concurrent_work_queue_test PASSED in 2.4s
//tensorflow/core/tfrt/run_handler_thread_pool:run_handler_test PASSED in 3.2s
//tensorflow/core/tfrt/run_handler_thread_pool:run_handler_util_test PASSED in 0.7s
//tensorflow/core/tfrt/runtime:tf_threadpool_concurrent_work_queue_test PASSED in 2.4s
//tensorflow/core/tfrt/runtime:work_queue_interface_test PASSED in 1.8s
//tensorflow/core/tfrt/utils:graph_partition_test PASSED in 5.9s
//tensorflow/core/transforms:eval_utils_test PASSED in 3.6s
//tensorflow/core/transforms:graph_transform_wrapper_test PASSED in 1.3s
//tensorflow/core/util:bcast_test PASSED in 1.5s
//tensorflow/core/util:command_line_flags_test PASSED in 2.5s
//tensorflow/core/util:debug_data_dumper_test PASSED in 3.6s
//tensorflow/core/util:debug_events_writer_test PASSED in 1.8s
//tensorflow/core/util:dump_graph_test PASSED in 2.9s
//tensorflow/core/util:equal_graph_def_test PASSED in 2.8s
//tensorflow/core/util:events_writer_test PASSED in 8.2s
//tensorflow/core/util:example_proto_fast_parsing_test PASSED in 5.4s
//tensorflow/core/util:example_proto_helper_test PASSED in 1.6s
//tensorflow/core/util:exec_on_stall_test PASSED in 3.4s
//tensorflow/core/util:fake_clock_env_test PASSED in 3.2s
//tensorflow/core/util:incremental_barrier_test PASSED in 1.6s
//tensorflow/core/util:matmul_bcast_test PASSED in 3.9s
//tensorflow/core/util:memmapped_file_system_test PASSED in 3.6s
//tensorflow/core/util:mkl_heuristics_test PASSED in 1.2s
//tensorflow/core/util:overflow_test PASSED in 0.9s
//tensorflow/core/util:presized_cuckoo_map_test PASSED in 6.1s
//tensorflow/core/util:ragged_to_dense_util_test PASSED in 1.0s
//tensorflow/core/util:reffed_status_callback_test PASSED in 1.8s
//tensorflow/core/util:reporter_test PASSED in 2.3s
//tensorflow/core/util:saved_tensor_slice_util_test PASSED in 2.9s
//tensorflow/core/util:semver_test PASSED in 2.2s
//tensorflow/core/util:stat_summarizer_test PASSED in 1.6s
//tensorflow/core/util:strided_slice_op_test PASSED in 1.4s
//tensorflow/core/util:tensor_format_test PASSED in 3.7s
//tensorflow/core/util:tensor_slice_reader_test PASSED in 1.8s
//tensorflow/core/util:tensor_slice_set_test PASSED in 2.3s
//tensorflow/core/util:tensor_slice_util_test PASSED in 1.7s
//tensorflow/core/util:tensor_slice_writer_test PASSED in 3.7s
//tensorflow/core/util:work_sharder_test PASSED in 3.4s
//tensorflow/core/util/ctc:ctc_beam_search_test PASSED in 0.6s
//tensorflow/core/util/proto:descriptor_pool_registry_test PASSED in 0.9s
//tensorflow/core/util/proto:proto_utils_test PASSED in 1.6s
//tensorflow/core/util/quantization:uniform_quant_ops_params_test PASSED in 0.8s
//tensorflow/core/util/sparse:sparse_tensor_test PASSED in 0.6s
//tensorflow/core/util/tensor_bundle:tensor_bundle_test PASSED in 29.8s
//tensorflow/dtensor/mlir:dtensor_location_test PASSED in 0.7s
//tensorflow/dtensor/mlir/tests:annotate_global_shape.mlir.test PASSED in 2.4s
//tensorflow/dtensor/mlir/tests:cluster_function_conversion.mlir.test PASSED in 2.4s
//tensorflow/dtensor/mlir/tests:constant_folding.mlir.test PASSED in 3.1s
//tensorflow/dtensor/mlir/tests:decompose_controlflow.mlir.test PASSED in 3.6s
//tensorflow/dtensor/mlir/tests:designate_resource_handle_mesh.mlir.test PASSED in 2.1s
//tensorflow/dtensor/mlir/tests:device_mesh_cluster_coarsening.mlir.test PASSED in 3.0s
//tensorflow/dtensor/mlir/tests:dtensor_all_gather.mlir.test PASSED in 2.8s
//tensorflow/dtensor/mlir/tests:dtensor_all_scatter.mlir.test PASSED in 3.3s
//tensorflow/dtensor/mlir/tests:dtensor_allreduce_combine_optimization.mlir.test PASSED in 3.5s
//tensorflow/dtensor/mlir/tests:dtensor_allreduce_lowering.mlir.test PASSED in 3.3s
//tensorflow/dtensor/mlir/tests:dtensor_allreduce_scatter_optimization.mlir.test PASSED in 2.0s
//tensorflow/dtensor/mlir/tests:dtensor_allreduce_sum_optimization.mlir.test PASSED in 2.4s
//tensorflow/dtensor/mlir/tests:dtensor_alltoall_lowering.mlir.test PASSED in 1.8s
//tensorflow/dtensor/mlir/tests:dtensor_collective_type_lowering.mlir.test PASSED in 2.8s
//tensorflow/dtensor/mlir/tests:dtensor_layout_must_execute.mlir.test PASSED in 1.9s
//tensorflow/dtensor/mlir/tests:dtensor_layout_to_xla_sharding_op.mlir.test PASSED in 2.2s
//tensorflow/dtensor/mlir/tests:dtensor_mixed_precision_reduce.mlir.test PASSED in 3.5s
//tensorflow/dtensor/mlir/tests:dtensor_reduce_scatter_lowering.mlir.test PASSED in 2.2s
//tensorflow/dtensor/mlir/tests:dtensor_remove_dtensorlayout.mlir.test PASSED in 2.3s
//tensorflow/dtensor/mlir/tests:dtensor_replace_auxiliary_layout_op.mlir.test PASSED in 1.9s
//tensorflow/dtensor/mlir/tests:dtensor_replace_relayout_with_identity.mlir.test PASSED in 2.3s
//tensorflow/dtensor/mlir/tests:dtensor_set_hlo_sharding.mlir.test PASSED in 1.7s
//tensorflow/dtensor/mlir/tests:dtensor_set_hlo_sharding_default.mlir.test PASSED in 2.0s
//tensorflow/dtensor/mlir/tests:dtensor_xla_spmd_integration.mlir.test PASSED in 2.6s
//tensorflow/dtensor/mlir/tests:elide_identity_before_copy_to_mesh.mlir.test PASSED in 1.8s
//tensorflow/dtensor/mlir/tests:function_renaming.mlir.test PASSED in 1.5s
//tensorflow/dtensor/mlir/tests:handle_cross_cluster_dependencies.mlir.test PASSED in 2.4s
//tensorflow/dtensor/mlir/tests:handle_sparsetensors.mlir.test PASSED in 1.9s
//tensorflow/dtensor/mlir/tests:layout_propagation_v2.mlir.test PASSED in 2.7s
//tensorflow/dtensor/mlir/tests:lower_send_recv.mlir.test PASSED in 1.8s
//tensorflow/dtensor/mlir/tests:merge_clusters.mlir.test PASSED in 2.1s
//tensorflow/dtensor/mlir/tests:mesh_propagation.mlir.test PASSED in 1.8s
//tensorflow/dtensor/mlir/tests:multi_device_expansion.mlir.test PASSED in 1.5s
//tensorflow/dtensor/mlir/tests:op_to_device_cluster.mlir.test PASSED in 2.3s
//tensorflow/dtensor/mlir/tests:propagate_default_layout.mlir.test PASSED in 1.8s
//tensorflow/dtensor/mlir/tests:propagate_device_id_to_function.mlir.test PASSED in 2.3s
//tensorflow/dtensor/mlir/tests:restore_and_assign.mlir.test PASSED in 2.7s
//tensorflow/dtensor/mlir/tests:restore_shape_inference.mlir.test PASSED in 1.7s
//tensorflow/dtensor/mlir/tests:set_default_sharding.mlir.test PASSED in 1.8s
//tensorflow/dtensor/mlir/tests:sparse_expansion.mlir.test PASSED in 1.6s
//tensorflow/dtensor/mlir/tests:spmd_batchparallel.mlir.test PASSED in 1.8s
//tensorflow/dtensor/mlir/tests:spmd_concat.mlir.test PASSED in 2.9s
//tensorflow/dtensor/mlir/tests:spmd_conv.mlir.test PASSED in 3.1s
//tensorflow/dtensor/mlir/tests:spmd_einsum.mlir.test PASSED in 2.4s
//tensorflow/dtensor/mlir/tests:spmd_expansion.mlir.test PASSED in 2.2s
//tensorflow/dtensor/mlir/tests:spmd_fft.mlir.test PASSED in 2.3s
//tensorflow/dtensor/mlir/tests:spmd_io_ops.mlir.test PASSED in 1.7s
//tensorflow/dtensor/mlir/tests:spmd_iterator.mlir.test PASSED in 2.9s
//tensorflow/dtensor/mlir/tests:spmd_matmul.mlir.test PASSED in 4.0s
//tensorflow/dtensor/mlir/tests:spmd_random.mlir.test PASSED in 3.8s
//tensorflow/dtensor/mlir/tests:spmd_save_restore.mlir.test PASSED in 3.5s
//tensorflow/dtensor/mlir/tests:spmd_segment_sum.mlir.test PASSED in 3.4s
//tensorflow/dtensor/mlir/tests:spmd_slice.mlir.test PASSED in 3.4s
//tensorflow/dtensor/mlir/tests:spmd_softmax_loss.mlir.test PASSED in 1.8s
//tensorflow/dtensor/mlir/tests:spmd_squeeze.mlir.test PASSED in 3.0s
//tensorflow/dtensor/mlir/tests:spmd_var_handle.mlir.test PASSED in 2.7s
//tensorflow/dtensor/mlir/tests:tf_dtensor_ops.mlir.test PASSED in 2.6s
//tensorflow/dtensor/mlir/tests:tpu_add_resource_device_attribute.mlir.test PASSED in 4.0s
//tensorflow/dtensor/mlir/tests:tpu_integration.mlir.test PASSED in 3.0s
//tensorflow/dtensor/mlir/tests:undo_merge_const_across_mesh.mlir.test PASSED in 2.2s
//tensorflow/dtensor/mlir/tests:update_tpu_metadata.mlir.test PASSED in 2.1s
//tensorflow/dtensor/python/tests:api_test PASSED in 111.4s
//tensorflow/dtensor/python/tests:array_ops_test_cpu PASSED in 66.5s
//tensorflow/dtensor/python/tests:cache_test_cpu PASSED in 60.3s
//tensorflow/dtensor/python/tests:collective_combine_all_reduce_test_cpu PASSED in 98.7s
//tensorflow/dtensor/python/tests:collective_test_cpu PASSED in 61.5s
//tensorflow/dtensor/python/tests:config_test_cpu PASSED in 27.7s
//tensorflow/dtensor/python/tests:device_test_cpu PASSED in 103.0s
//tensorflow/dtensor/python/tests:layout_test_cpu PASSED in 72.2s
//tensorflow/dtensor/python/tests:mesh_util_test_cpu PASSED in 40.2s
//tensorflow/dtensor/python/tests:multi_client_test_cpu PASSED in 47.5s
//tensorflow/dtensor/python/tests:numpy_util_test_cpu PASSED in 29.3s
//tensorflow/dtensor/python/tests:variable_test_cpu PASSED in 51.2s
//tensorflow/dtensor/tests:dtensor_operation_test PASSED in 39.5s
//tensorflow/dtensor/tests:executable_manager_test PASSED in 38.8s
//tensorflow/dtensor/tests:layout_to_xla_sharding_test PASSED in 0.4s
//tensorflow/dtensor/tests:slice_util_test PASSED in 1.0s
//tensorflow/dtensor/tests:spmd_expander_test PASSED in 7.7s
//tensorflow/dtensor/tests:tensor_layout_test PASSED in 0.5s
//tensorflow/examples/adding_an_op:fact_test PASSED in 145.1s
//tensorflow/examples/adding_an_op:zero_out_1_test PASSED in 135.5s
//tensorflow/examples/adding_an_op:zero_out_2_test PASSED in 141.2s
//tensorflow/examples/adding_an_op:zero_out_3_test PASSED in 139.7s
//tensorflow/examples/custom_ops_doc/multiplex_1:multiplex_1_test PASSED in 129.8s
//tensorflow/examples/custom_ops_doc/multiplex_2:multiplex_2_test_cpu PASSED in 273.6s
//tensorflow/examples/custom_ops_doc/multiplex_3:multiplex_3_test PASSED in 190.5s
//tensorflow/examples/custom_ops_doc/multiplex_4:multiplex_4_test PASSED in 153.9s //tensorflow/examples/custom_ops_doc/simple_hash_table:simple_hash_table_test PASSED in 156.4s //tensorflow/examples/custom_ops_doc/sleep:sleep_test PASSED in 123.7s //tensorflow/examples/speech_commands:accuracy_utils_test PASSED in 4.2s //tensorflow/examples/speech_commands:models_test PASSED in 142.4s //tensorflow/examples/speech_commands:recognize_commands_test PASSED in 7.7s //tensorflow/examples/wav_to_spectrogram:wav_to_spectrogram_test PASSED in 6.5s //tensorflow/js:ts_op_gen_test PASSED in 1.0s //tensorflow/python/autograph/converters:asserts_test PASSED in 27.9s //tensorflow/python/autograph/converters:break_statements_test PASSED in 36.6s //tensorflow/python/autograph/converters:call_trees_test PASSED in 89.2s //tensorflow/python/autograph/converters:conditional_expressions_test PASSED in 45.8s //tensorflow/python/autograph/converters:continue_statements_test PASSED in 45.7s //tensorflow/python/autograph/converters:control_flow_test PASSED in 52.2s //tensorflow/python/autograph/converters:directives_test PASSED in 40.1s //tensorflow/python/autograph/converters:functions_test PASSED in 33.8s //tensorflow/python/autograph/converters:lists_test PASSED in 51.4s //tensorflow/python/autograph/converters:logical_expressions_test PASSED in 33.8s //tensorflow/python/autograph/converters:return_statements_test PASSED in 86.8s //tensorflow/python/autograph/converters:slices_test PASSED in 37.2s //tensorflow/python/autograph/converters:variables_test PASSED in 58.4s //tensorflow/python/autograph/core:converter_test PASSED in 28.0s //tensorflow/python/autograph/core:function_wrappers_test PASSED in 31.8s //tensorflow/python/autograph/impl:api_test PASSED in 66.5s //tensorflow/python/autograph/impl:conversion_test PASSED in 40.9s //tensorflow/python/autograph/lang:special_functions_test PASSED in 24.9s //tensorflow/python/autograph/operators:conditional_expressions_test PASSED in 46.5s 
//tensorflow/python/autograph/operators:control_flow_test PASSED in 82.1s //tensorflow/python/autograph/operators:data_structures_test PASSED in 49.6s //tensorflow/python/autograph/operators:exceptions_test PASSED in 34.1s //tensorflow/python/autograph/operators:logical_test PASSED in 37.6s //tensorflow/python/autograph/operators:py_builtins_test PASSED in 64.9s //tensorflow/python/autograph/operators:slices_test PASSED in 44.2s //tensorflow/python/autograph/operators:variables_test PASSED in 42.3s //tensorflow/python/autograph/pyct:anno_test PASSED in 78.7s //tensorflow/python/autograph/pyct:ast_util_test PASSED in 45.6s //tensorflow/python/autograph/pyct:cache_test PASSED in 46.5s //tensorflow/python/autograph/pyct:cfg_test PASSED in 57.9s //tensorflow/python/autograph/pyct:error_utils_test PASSED in 42.9s //tensorflow/python/autograph/pyct:inspect_utils_test PASSED in 41.7s //tensorflow/python/autograph/pyct:loader_test PASSED in 45.1s //tensorflow/python/autograph/pyct:naming_test PASSED in 56.5s //tensorflow/python/autograph/pyct:origin_info_test PASSED in 96.7s //tensorflow/python/autograph/pyct:parser_test PASSED in 37.7s //tensorflow/python/autograph/pyct:pretty_printer_test PASSED in 41.6s //tensorflow/python/autograph/pyct:qual_names_test PASSED in 44.9s //tensorflow/python/autograph/pyct:templates_test PASSED in 34.0s //tensorflow/python/autograph/pyct:transformer_test PASSED in 41.2s //tensorflow/python/autograph/pyct:transpiler_test PASSED in 97.9s //tensorflow/python/autograph/pyct/static_analysis:activity_test PASSED in 37.1s //tensorflow/python/autograph/pyct/static_analysis:liveness_test PASSED in 89.8s //tensorflow/python/autograph/pyct/static_analysis:reaching_definitions_test PASSED in 52.6s //tensorflow/python/autograph/pyct/static_analysis:reaching_fndefs_test PASSED in 29.9s //tensorflow/python/autograph/pyct/static_analysis:type_inference_test PASSED in 100.8s //tensorflow/python/autograph/tests:assertion_test PASSED in 136.7s 
//tensorflow/python/autograph/tests:basic_ifexp_test PASSED in 213.5s //tensorflow/python/autograph/tests:call_to_builtin_function_test PASSED in 205.6s //tensorflow/python/autograph/tests:call_to_lambda_function_test PASSED in 166.7s //tensorflow/python/autograph/tests:call_to_named_tuple_test PASSED in 158.8s //tensorflow/python/autograph/tests:call_to_numpy_function_test PASSED in 169.1s //tensorflow/python/autograph/tests:call_to_print_function_test PASSED in 227.9s //tensorflow/python/autograph/tests:call_to_tf_api_test PASSED in 228.6s //tensorflow/python/autograph/tests:call_to_user_function_test PASSED in 173.0s //tensorflow/python/autograph/tests:composite_names_in_control_flow_test PASSED in 159.9s //tensorflow/python/autograph/tests:cond_basic_test PASSED in 190.3s //tensorflow/python/autograph/tests:datasets_test PASSED in 190.7s //tensorflow/python/autograph/tests:early_return_test PASSED in 240.5s //tensorflow/python/autograph/tests:ext_slice_test PASSED in 261.7s //tensorflow/python/autograph/tests:generator_test PASSED in 174.5s //tensorflow/python/autograph/tests:logical_expression_test PASSED in 172.9s //tensorflow/python/autograph/tests:loop_basic_test PASSED in 393.6s //tensorflow/python/autograph/tests:loop_control_flow_illegal_cases_test PASSED in 191.1s //tensorflow/python/autograph/tests:loop_created_variables_test PASSED in 214.0s //tensorflow/python/autograph/tests:loop_scoping_test PASSED in 195.3s //tensorflow/python/autograph/tests:loop_with_function_call_test PASSED in 194.1s //tensorflow/python/autograph/tests:loop_with_variable_type_illegal_cases_test PASSED in 183.6s //tensorflow/python/autograph/tests:loop_with_variable_type_test PASSED in 184.0s //tensorflow/python/autograph/tests:nested_control_flow_test PASSED in 153.3s //tensorflow/python/autograph/tests:type_annotations_test PASSED in 148.2s //tensorflow/python/autograph/utils:context_managers_test PASSED in 56.3s //tensorflow/python/autograph/utils:misc_test PASSED in 59.4s 
//tensorflow/python/autograph/utils:tensor_list_test PASSED in 66.1s //tensorflow/python/autograph/utils:tensors_test PASSED in 75.5s //tensorflow/python/checkpoint:checkpoint_management_test_cpu PASSED in 94.9s //tensorflow/python/checkpoint:checkpoint_metrics_test PASSED in 47.4s //tensorflow/python/checkpoint:checkpoint_test PASSED in 203.3s //tensorflow/python/checkpoint:checkpoint_view_test PASSED in 59.1s //tensorflow/python/checkpoint:checkpoint_with_v1_optimizers_test PASSED in 131.4s //tensorflow/python/checkpoint:functional_saver_test_cpu PASSED in 60.6s //tensorflow/python/checkpoint:restore_test PASSED in 48.5s //tensorflow/python/checkpoint:save_util_v1_test PASSED in 50.6s //tensorflow/python/checkpoint:saveable_compat_test PASSED in 108.3s //tensorflow/python/checkpoint:tensor_callable_test PASSED in 109.0s //tensorflow/python/checkpoint:trackable_view_test PASSED in 59.3s //tensorflow/python/checkpoint/sharding:sharding_policies_test PASSED in 143.4s //tensorflow/python/checkpoint/sharding:sharding_util_test PASSED in 49.4s //tensorflow/python/client:device_lib_test_cpu PASSED in 104.2s //tensorflow/python/client:events_writer_test PASSED in 55.7s //tensorflow/python/client:session_list_devices_test PASSED in 104.2s //tensorflow/python/client:session_partial_run_test PASSED in 148.5s //tensorflow/python/client:timeline_test_cpu PASSED in 55.3s //tensorflow/python/client:virtual_gpu_test_cpu PASSED in 87.2s //tensorflow/python/compat:compat_test PASSED in 107.9s //tensorflow/python/compat:disable_v2_behavior_test PASSED in 102.8s //tensorflow/python/compiler/mlir:mlir_test PASSED in 60.3s //tensorflow/python/compiler/tensorrt/test:batch_matmul_test_cpu PASSED in 88.3s //tensorflow/python/compiler/tensorrt/test:biasadd_matmul_test_cpu PASSED in 76.8s //tensorflow/python/compiler/tensorrt/test:bool_test_cpu PASSED in 70.8s //tensorflow/python/compiler/tensorrt/test:cast_test_cpu PASSED in 58.2s 
//tensorflow/python/compiler/tensorrt/test:concatenation_test_cpu PASSED in 43.1s //tensorflow/python/compiler/tensorrt/test:const_broadcast_test_cpu PASSED in 47.3s //tensorflow/python/compiler/tensorrt/test:data_dependent_shape_test_cpu PASSED in 32.3s //tensorflow/python/compiler/tensorrt/test:dynamic_input_shapes_test_cpu PASSED in 60.2s //tensorflow/python/compiler/tensorrt/test:identity_output_test_cpu PASSED in 43.4s //tensorflow/python/compiler/tensorrt/test:int32_test_cpu PASSED in 85.1s //tensorflow/python/compiler/tensorrt/test:lru_cache_test_cpu PASSED in 145.7s //tensorflow/python/compiler/tensorrt/test:multi_connection_neighbor_engine_test_cpu PASSED in 68.2s //tensorflow/python/compiler/tensorrt/test:neighboring_engine_test_cpu PASSED in 59.7s //tensorflow/python/compiler/tensorrt/test:quantization_test_cpu PASSED in 60.7s //tensorflow/python/compiler/tensorrt/test:rank_two_test_cpu PASSED in 72.7s //tensorflow/python/compiler/tensorrt/test:reshape_transpose_test_cpu PASSED in 76.5s //tensorflow/python/compiler/tensorrt/test:topk_test_cpu PASSED in 141.3s //tensorflow/python/compiler/tensorrt/test:trt_engine_op_shape_test_cpu PASSED in 73.1s //tensorflow/python/compiler/tensorrt/test:trt_mode_test_cpu PASSED in 138.7s //tensorflow/python/compiler/tensorrt/test:unary_test_cpu PASSED in 59.4s //tensorflow/python/compiler/tensorrt/test:vgg_block_nchw_test_cpu PASSED in 58.1s //tensorflow/python/compiler/tensorrt/test:vgg_block_test_cpu PASSED in 58.5s //tensorflow/python/compiler/xla:jit_compile_test_cpu PASSED in 39.9s //tensorflow/python/compiler/xla:jit_test_cpu PASSED in 72.2s //tensorflow/python/compiler/xla:xla_test_cpu PASSED in 126.4s //tensorflow/python/compiler/xla/experimental:xla_sharding_test PASSED in 46.5s //tensorflow/python/data/experimental/kernel_tests:assert_cardinality_test PASSED in 145.1s //tensorflow/python/data/experimental/kernel_tests:assert_next_test PASSED in 81.1s 
//tensorflow/python/data/experimental/kernel_tests:assert_prev_test PASSED in 67.3s //tensorflow/python/data/experimental/kernel_tests:compression_ops_test PASSED in 81.3s //tensorflow/python/data/experimental/kernel_tests:copy_to_device_test_cpu PASSED in 97.2s //tensorflow/python/data/experimental/kernel_tests:dense_to_sparse_batch_test PASSED in 159.6s //tensorflow/python/data/experimental/kernel_tests:io_test PASSED in 318.8s //tensorflow/python/data/experimental/kernel_tests:iterator_ops_test PASSED in 35.7s //tensorflow/python/data/experimental/kernel_tests:lookup_ops_test PASSED in 125.5s //tensorflow/python/data/experimental/kernel_tests:make_csv_dataset_test PASSED in 130.4s //tensorflow/python/data/experimental/kernel_tests:make_saveable_from_iterator_test PASSED in 47.8s //tensorflow/python/data/experimental/kernel_tests:make_tf_record_dataset_test PASSED in 244.6s //tensorflow/python/data/experimental/kernel_tests:map_defun_op_test PASSED in 82.3s //tensorflow/python/data/experimental/kernel_tests:matching_files_dataset_test PASSED in 276.4s //tensorflow/python/data/experimental/kernel_tests:model_dataset_test PASSED in 58.9s //tensorflow/python/data/experimental/kernel_tests:non_serializable_test PASSED in 75.9s //tensorflow/python/data/experimental/kernel_tests:pad_to_cardinality_test PASSED in 105.1s //tensorflow/python/data/experimental/kernel_tests:prefetch_to_device_test_cpu PASSED in 46.8s //tensorflow/python/data/experimental/kernel_tests:prefetch_with_slack_test PASSED in 72.0s //tensorflow/python/data/experimental/kernel_tests:shuffle_and_repeat_test PASSED in 129.9s //tensorflow/python/data/experimental/kernel_tests:sleep_test PASSED in 74.8s //tensorflow/python/data/experimental/kernel_tests:tf_record_writer_test PASSED in 69.3s //tensorflow/python/data/experimental/kernel_tests:variant_test PASSED in 65.3s //tensorflow/python/data/experimental/kernel_tests:wrap_unwrap_test_cpu PASSED in 50.4s 
//tensorflow/python/data/experimental/kernel_tests/optimization:filter_fusion_test PASSED in 111.3s //tensorflow/python/data/experimental/kernel_tests/optimization:filter_parallelization_test PASSED in 326.5s //tensorflow/python/data/experimental/kernel_tests/optimization:grappler_test_cpu PASSED in 58.0s //tensorflow/python/data/experimental/kernel_tests/optimization:make_deterministic_test PASSED in 93.4s //tensorflow/python/data/experimental/kernel_tests/optimization:map_and_batch_fusion_test PASSED in 70.7s //tensorflow/python/data/experimental/kernel_tests/optimization:map_and_filter_fusion_test PASSED in 118.1s //tensorflow/python/data/experimental/kernel_tests/optimization:map_fusion_test PASSED in 426.3s //tensorflow/python/data/experimental/kernel_tests/optimization:map_parallelization_test PASSED in 141.2s //tensorflow/python/data/experimental/kernel_tests/optimization:noop_elimination_test PASSED in 79.0s //tensorflow/python/data/experimental/kernel_tests/optimization:seq_interleave_prefetch_test PASSED in 82.5s //tensorflow/python/data/experimental/kernel_tests/service:multi_device_test PASSED in 81.9s //tensorflow/python/data/experimental/service:server_lib_test PASSED in 91.0s //tensorflow/python/data/kernel_tests:as_numpy_iterator_test PASSED in 51.3s //tensorflow/python/data/kernel_tests:bucket_by_sequence_length_test PASSED in 84.6s //tensorflow/python/data/kernel_tests:cache_test PASSED in 227.1s //tensorflow/python/data/kernel_tests:cardinality_test PASSED in 77.7s //tensorflow/python/data/kernel_tests:checkpoint_test PASSED in 110.8s //tensorflow/python/data/kernel_tests:concatenate_test PASSED in 164.9s //tensorflow/python/data/kernel_tests:counter_test PASSED in 228.5s //tensorflow/python/data/kernel_tests:dataset_spec_test PASSED in 90.3s //tensorflow/python/data/kernel_tests:dataset_test PASSED in 97.7s //tensorflow/python/data/kernel_tests:enumerate_test PASSED in 192.4s //tensorflow/python/data/kernel_tests:fingerprint_test PASSED in 65.6s 
//tensorflow/python/data/kernel_tests:from_sparse_tensor_slices_test PASSED in 65.5s //tensorflow/python/data/kernel_tests:get_single_element_test PASSED in 52.7s //tensorflow/python/data/kernel_tests:ignore_errors_test PASSED in 106.3s //tensorflow/python/data/kernel_tests:io_test PASSED in 198.7s //tensorflow/python/data/kernel_tests:iterator_test_cpu PASSED in 82.5s //tensorflow/python/data/kernel_tests:len_test PASSED in 44.8s //tensorflow/python/data/kernel_tests:optional_test_cpu PASSED in 73.2s //tensorflow/python/data/kernel_tests:options_test PASSED in 63.8s //tensorflow/python/data/kernel_tests:placement_test_cpu PASSED in 45.7s //tensorflow/python/data/kernel_tests:prefetch_test PASSED in 199.9s //tensorflow/python/data/kernel_tests:random_test PASSED in 145.5s //tensorflow/python/data/kernel_tests:range_test PASSED in 177.8s //tensorflow/python/data/kernel_tests:rebatch_test PASSED in 119.1s //tensorflow/python/data/kernel_tests:reduce_test_cpu PASSED in 114.2s //tensorflow/python/data/kernel_tests:scan_test_cpu PASSED in 159.3s //tensorflow/python/data/kernel_tests:sparse_batch_test PASSED in 81.2s //tensorflow/python/data/kernel_tests:unbatch_test PASSED in 112.6s //tensorflow/python/data/util:convert_test PASSED in 32.3s //tensorflow/python/data/util:nest_test PASSED in 47.4s //tensorflow/python/data/util:options_test PASSED in 30.2s //tensorflow/python/data/util:random_seed_test PASSED in 26.0s //tensorflow/python/data/util:sparse_test PASSED in 22.3s //tensorflow/python/data/util:structure_test PASSED in 44.8s //tensorflow/python/data/util:traverse_test PASSED in 53.1s //tensorflow/python/debug/cli:analyzer_cli_test_cpu PASSED in 40.7s //tensorflow/python/debug/cli:cli_config_test PASSED in 33.5s //tensorflow/python/debug/cli:cli_shared_test PASSED in 29.8s //tensorflow/python/debug/cli:command_parser_test PASSED in 30.0s //tensorflow/python/debug/cli:debugger_cli_common_test PASSED in 34.1s //tensorflow/python/debug/cli:evaluator_test PASSED in 
32.1s //tensorflow/python/debug/cli:profile_analyzer_cli_test PASSED in 68.5s //tensorflow/python/debug/cli:readline_ui_test PASSED in 39.4s //tensorflow/python/debug/cli:tensor_format_test PASSED in 32.2s //tensorflow/python/debug/lib:check_numerics_callback_test_cpu PASSED in 50.8s //tensorflow/python/debug/lib:common_test PASSED in 34.5s //tensorflow/python/debug/lib:debug_data_test PASSED in 32.0s //tensorflow/python/debug/lib:debug_events_monitors_test PASSED in 41.9s //tensorflow/python/debug/lib:debug_events_writer_test PASSED in 41.7s //tensorflow/python/debug/lib:debug_gradients_test_cpu PASSED in 60.0s //tensorflow/python/debug/lib:debug_graph_reconstruction_test_cpu PASSED in 54.1s //tensorflow/python/debug/lib:debug_graphs_test PASSED in 30.5s //tensorflow/python/debug/lib:debug_grappler_test_cpu PASSED in 33.3s //tensorflow/python/debug/lib:debug_utils_test PASSED in 34.6s //tensorflow/python/debug/lib:debug_v2_ops_test_cpu PASSED in 64.3s //tensorflow/python/debug/lib:profiling_test PASSED in 34.7s //tensorflow/python/debug/lib:session_debug_file_test_cpu PASSED in 64.9s //tensorflow/python/debug/lib:session_debug_multi_gpu_test_cpu PASSED in 30.7s //tensorflow/python/debug/lib:source_utils_test PASSED in 51.2s //tensorflow/python/debug/wrappers:disk_usage_test PASSED in 24.6s //tensorflow/python/debug/wrappers:dumping_wrapper_test PASSED in 40.8s //tensorflow/python/debug/wrappers:framework_test PASSED in 42.4s //tensorflow/python/debug/wrappers:local_cli_wrapper_test PASSED in 51.0s //tensorflow/python/distribute:checkpoint_utils_test_2gpu PASSED in 60.4s //tensorflow/python/distribute:checkpoint_utils_test_cpu PASSED in 79.0s //tensorflow/python/distribute:checkpointing_test_2gpu PASSED in 71.0s //tensorflow/python/distribute:checkpointing_test_cpu PASSED in 79.5s //tensorflow/python/distribute:collective_util_test PASSED in 52.4s //tensorflow/python/distribute:combinations_test_2gpu PASSED in 70.9s 
//tensorflow/python/distribute:combinations_test_cpu PASSED in 76.4s //tensorflow/python/distribute:cross_device_utils_test_cpu PASSED in 51.0s //tensorflow/python/distribute:custom_training_loop_gradient_test_2gpu PASSED in 60.9s //tensorflow/python/distribute:custom_training_loop_gradient_test_cpu PASSED in 56.6s //tensorflow/python/distribute:device_util_test_cpu PASSED in 70.9s //tensorflow/python/distribute:distribute_coordinator_test PASSED in 47.4s //tensorflow/python/distribute:distribute_lib_test PASSED in 64.6s //tensorflow/python/distribute:distribute_utils_test_2gpu PASSED in 46.8s //tensorflow/python/distribute:distribute_utils_test_cpu PASSED in 64.9s //tensorflow/python/distribute:input_ops_test_cpu PASSED in 67.2s //tensorflow/python/distribute:metrics_v1_test_2gpu PASSED in 100.8s //tensorflow/python/distribute:metrics_v1_test_cpu PASSED in 86.8s //tensorflow/python/distribute:mirrored_values_test_2gpu PASSED in 87.9s //tensorflow/python/distribute:mirrored_values_test_cpu PASSED in 50.8s //tensorflow/python/distribute:mirrored_variable_test_2gpu PASSED in 131.4s //tensorflow/python/distribute:mirrored_variable_test_cpu PASSED in 90.0s //tensorflow/python/distribute:multi_process_runner_no_init_test PASSED in 40.1s //tensorflow/python/distribute:multi_worker_continuous_run_test_cpu PASSED in 73.9s //tensorflow/python/distribute:multi_worker_util_test PASSED in 27.4s //tensorflow/python/distribute:mwms_pjrt_gpu_test_2gpu PASSED in 49.7s //tensorflow/python/distribute:mwms_pjrt_gpu_test_cpu PASSED in 43.3s //tensorflow/python/distribute:numpy_dataset_test PASSED in 41.2s //tensorflow/python/distribute:one_device_strategy_test_cpu PASSED in 101.6s //tensorflow/python/distribute:packed_distributed_variable_test PASSED in 71.5s //tensorflow/python/distribute:parameter_server_strategy_test_2gpu PASSED in 70.1s //tensorflow/python/distribute:parameter_server_strategy_test_cpu PASSED in 108.2s 
//tensorflow/python/distribute:parameter_server_strategy_v2_test_2gpu PASSED in 83.5s //tensorflow/python/distribute:parameter_server_strategy_v2_test_cpu PASSED in 111.8s //tensorflow/python/distribute:per_replica_test_2gpu PASSED in 72.5s //tensorflow/python/distribute:per_replica_test_cpu PASSED in 86.2s //tensorflow/python/distribute:ps_values_test_2gpu PASSED in 66.6s //tensorflow/python/distribute:ps_values_test_cpu PASSED in 89.9s //tensorflow/python/distribute:remote_mirrored_strategy_eager_test_cpu PASSED in 45.7s //tensorflow/python/distribute:sharded_variable_test PASSED in 137.0s //tensorflow/python/distribute:shared_variable_creator_test PASSED in 41.4s //tensorflow/python/distribute:strategy_combinations_test_cpu PASSED in 84.1s //tensorflow/python/distribute:template_mirrored_strategy_test_cpu PASSED in 41.8s //tensorflow/python/distribute:test_util_test_2gpu PASSED in 69.1s //tensorflow/python/distribute:test_util_test_cpu PASSED in 81.5s //tensorflow/python/distribute:tf_function_test_2gpu PASSED in 49.3s //tensorflow/python/distribute:tf_function_test_cpu PASSED in 66.4s //tensorflow/python/distribute:values_v2_test_cpu PASSED in 68.0s //tensorflow/python/distribute:warm_starting_util_test_2gpu PASSED in 107.8s //tensorflow/python/distribute:warm_starting_util_test_cpu PASSED in 66.9s //tensorflow/python/distribute/cluster_resolver:base_cluster_resolver_py_test PASSED in 35.7s //tensorflow/python/distribute/cluster_resolver:gce_cluster_resolver_py_test PASSED in 40.7s //tensorflow/python/distribute/cluster_resolver:kubernetes_cluster_resolver_py_test PASSED in 40.2s //tensorflow/python/distribute/cluster_resolver:sagemaker_cluster_resolver_py_test PASSED in 45.1s //tensorflow/python/distribute/cluster_resolver:slurm_cluster_resolver_py_test PASSED in 30.7s //tensorflow/python/distribute/cluster_resolver:tfconfig_cluster_resolver_py_test PASSED in 60.0s //tensorflow/python/distribute/cluster_resolver/tpu:tpu_cluster_resolver_py_test PASSED in 34.6s 
//tensorflow/python/distribute/coordinator:watchdog_test PASSED in 98.0s //tensorflow/python/distribute/experimental:dtensor_util_test_cpu PASSED in 51.6s //tensorflow/python/distribute/experimental:mirrored_strategy_test_cpu PASSED in 94.5s //tensorflow/python/distribute/experimental:multi_worker_mirrored_strategy_test_cpu PASSED in 46.9s //tensorflow/python/distribute/integration_test:saved_model_test_cpu PASSED in 204.6s //tensorflow/python/distribute/parallel_device:parallel_device_test_cpu PASSED in 70.6s //tensorflow/python/distribute/v1:all_reduce_test PASSED in 135.3s //tensorflow/python/distribute/v1:cross_device_ops_test_cpu PASSED in 162.8s //tensorflow/python/dlpack:dlpack_test_cpu PASSED in 33.9s //tensorflow/python/eager:backprop_test_cpu PASSED in 266.6s //tensorflow/python/eager:cancellation_test_cpu PASSED in 48.7s //tensorflow/python/eager:context_test_cpu PASSED in 52.0s //tensorflow/python/eager:core_test_cpu PASSED in 68.1s //tensorflow/python/eager:gradient_input_output_exclusions_test PASSED in 236.8s //tensorflow/python/eager:graph_only_ops_test_cpu PASSED in 31.8s //tensorflow/python/eager:lift_to_graph_test PASSED in 66.7s //tensorflow/python/eager:monitoring_test_cpu PASSED in 68.9s //tensorflow/python/eager:ops_test_cpu PASSED in 77.5s //tensorflow/python/eager:profiler_client_test PASSED in 38.5s //tensorflow/python/eager:profiler_test_cpu PASSED in 43.0s //tensorflow/python/eager:pywrap_tfe_test PASSED in 57.4s //tensorflow/python/eager:record_test PASSED in 44.5s //tensorflow/python/eager:run_eager_op_as_function_test_cpu PASSED in 38.9s //tensorflow/python/eager:run_eager_op_as_function_xla_test_cpu PASSED in 51.9s //tensorflow/python/eager:tensor_test_cpu PASSED in 49.5s //tensorflow/python/eager:wrap_function_device_test_cpu PASSED in 72.2s //tensorflow/python/eager:wrap_function_test PASSED in 47.6s //tensorflow/python/eager/memory_tests:remote_memory_test_cpu PASSED in 49.5s 
//tensorflow/python/eager/polymorphic_function:argument_naming_test_cpu PASSED in 40.6s //tensorflow/python/eager/polymorphic_function:atomic_function_test_cpu PASSED in 32.6s //tensorflow/python/eager/polymorphic_function:collection_test_cpu PASSED in 39.4s //tensorflow/python/eager/polymorphic_function:compiler_ir_test_cpu PASSED in 35.9s //tensorflow/python/eager/polymorphic_function:compiler_ir_test_cpu_mlir_bridge_test PASSED in 47.2s //tensorflow/python/eager/polymorphic_function:concrete_function_test_cpu PASSED in 35.6s //tensorflow/python/eager/polymorphic_function:function_spec_test PASSED in 29.4s //tensorflow/python/eager/polymorphic_function:polymorphic_function_xla_test_cpu PASSED in 39.2s //tensorflow/python/eager/polymorphic_function:tracing_compilation_test PASSED in 65.7s //tensorflow/python/feature_column:sequence_feature_column_integration_test PASSED in 102.7s //tensorflow/python/feature_column:serialization_test PASSED in 69.0s //tensorflow/python/framework:auto_control_deps_test PASSED in 94.3s //tensorflow/python/framework:c_api_util_test PASSED in 45.6s //tensorflow/python/framework:common_shapes_test PASSED in 38.8s //tensorflow/python/framework:composite_tensor_test PASSED in 74.0s //tensorflow/python/framework:config_test_2gpu PASSED in 49.7s //tensorflow/python/framework:config_test_cpu PASSED in 55.1s //tensorflow/python/framework:constant_op_test PASSED in 55.2s //tensorflow/python/framework:device_spec_test PASSED in 37.7s //tensorflow/python/framework:device_test PASSED in 55.3s //tensorflow/python/framework:dtypes_test PASSED in 219.2s //tensorflow/python/framework:error_interpolation_test PASSED in 35.3s //tensorflow/python/framework:errors_test PASSED in 57.0s //tensorflow/python/framework:extension_type_field_test PASSED in 45.2s //tensorflow/python/framework:extension_type_test PASSED in 87.0s //tensorflow/python/framework:file_system_test PASSED in 82.1s //tensorflow/python/framework:flexible_dtypes_test PASSED in 187.8s 
//tensorflow/python/framework:function_def_to_graph_test PASSED in 72.1s //tensorflow/python/framework:graph_util_test PASSED in 46.8s //tensorflow/python/framework:immutable_dict_test PASSED in 36.1s //tensorflow/python/framework:importer_test PASSED in 53.4s //tensorflow/python/framework:indexed_slices_test PASSED in 54.0s //tensorflow/python/framework:kernels_test PASSED in 50.5s //tensorflow/python/framework:meta_graph_test PASSED in 56.4s //tensorflow/python/framework:node_file_writer_test_cpu PASSED in 42.0s //tensorflow/python/framework:offset_counter_helper_test PASSED in 0.8s //tensorflow/python/framework:op_allowlist_namespace_test PASSED in 13.3s //tensorflow/python/framework:op_callbacks_test_cpu PASSED in 81.1s //tensorflow/python/framework:op_def_library_test PASSED in 47.9s //tensorflow/python/framework:op_def_util_test PASSED in 56.1s //tensorflow/python/framework:ops_enable_eager_test PASSED in 27.0s //tensorflow/python/framework:ops_test PASSED in 57.4s //tensorflow/python/framework:proto_test PASSED in 47.0s //tensorflow/python/framework:py_context_manager_test PASSED in 41.4s //tensorflow/python/framework:python_api_dispatcher_test PASSED in 43.4s //tensorflow/python/framework:python_api_info_test PASSED in 76.1s //tensorflow/python/framework:python_api_parameter_converter_test PASSED in 44.9s //tensorflow/python/framework:python_op_gen_annotation_test PASSED in 14.2s //tensorflow/python/framework:python_op_gen_annotator_test PASSED in 1.0s //tensorflow/python/framework:python_op_gen_test PASSED in 0.9s //tensorflow/python/framework:python_tensor_converter_test PASSED in 36.0s //tensorflow/python/framework:random_seed_test PASSED in 89.9s //tensorflow/python/framework:registry_test PASSED in 50.9s //tensorflow/python/framework:smart_cond_test PASSED in 45.0s //tensorflow/python/framework:sparse_tensor_test PASSED in 44.6s //tensorflow/python/framework:subscribe_test PASSED in 40.1s //tensorflow/python/framework:tensor_shape_test PASSED in 30.0s 
//tensorflow/python/framework:tensor_test PASSED in 83.9s //tensorflow/python/framework:tensor_util_test PASSED in 85.4s //tensorflow/python/framework:test_combinations_test PASSED in 36.6s //tensorflow/python/framework:test_util_test_cpu PASSED in 56.6s //tensorflow/python/framework:tf2_test PASSED in 44.0s //tensorflow/python/framework:traceable_stack_test PASSED in 42.5s //tensorflow/python/framework:type_spec_test PASSED in 74.8s //tensorflow/python/framework:versions_test PASSED in 58.4s //tensorflow/python/framework:weak_tensor_test PASSED in 97.4s //tensorflow/python/framework/experimental:unified_api_test_cpu PASSED in 103.9s //tensorflow/python/grappler:arithmetic_optimizer_test_cpu PASSED in 39.3s //tensorflow/python/grappler:auto_mixed_precision_test_cpu PASSED in 77.5s //tensorflow/python/grappler:constant_folding_test_cpu PASSED in 42.8s //tensorflow/python/grappler:cost_analyzer_test PASSED in 68.4s //tensorflow/python/grappler:datasets_test PASSED in 66.6s //tensorflow/python/grappler:item_test PASSED in 45.2s //tensorflow/python/grappler:memory_optimizer_test PASSED in 102.6s //tensorflow/python/grappler:model_analyzer_test PASSED in 51.0s //tensorflow/python/grappler:remapper_test_cpu PASSED in 67.9s //tensorflow/python/grappler:tf_optimizer_test PASSED in 42.3s //tensorflow/python/kernel_tests:benchmark_test_cpu PASSED in 38.9s //tensorflow/python/kernel_tests:check_ops_test_cpu PASSED in 102.4s //tensorflow/python/kernel_tests:collective_ops_multi_worker_test PASSED in 132.2s //tensorflow/python/kernel_tests:composite_tensor_ops_test PASSED in 63.4s //tensorflow/python/kernel_tests:critical_section_test_cpu PASSED in 119.3s //tensorflow/python/kernel_tests:garbage_collection_test PASSED in 34.9s //tensorflow/python/kernel_tests:gradient_correctness_test_cpu PASSED in 43.1s //tensorflow/python/kernel_tests:histogram_ops_test_cpu PASSED in 49.5s //tensorflow/python/kernel_tests:logging_ops_test_cpu PASSED in 80.4s 
//tensorflow/python/kernel_tests:numerics_test_cpu PASSED in 74.9s //tensorflow/python/kernel_tests:template_test PASSED in 69.8s //tensorflow/python/kernel_tests:trace_op_test_cpu PASSED in 48.5s //tensorflow/python/kernel_tests/array_ops:batch_gather_op_test_cpu PASSED in 48.1s //tensorflow/python/kernel_tests/array_ops:batch_scatter_ops_test PASSED in 46.9s //tensorflow/python/kernel_tests/array_ops:batchtospace_op_test_cpu PASSED in 52.7s //tensorflow/python/kernel_tests/array_ops:bcast_ops_test PASSED in 46.3s //tensorflow/python/kernel_tests/array_ops:bitcast_op_test_cpu PASSED in 40.6s //tensorflow/python/kernel_tests/array_ops:broadcast_to_ops_test_cpu PASSED in 84.3s //tensorflow/python/kernel_tests/array_ops:cast_op_test_cpu PASSED in 67.4s //tensorflow/python/kernel_tests/array_ops:constant_op_eager_test_cpu PASSED in 62.3s //tensorflow/python/kernel_tests/array_ops:constant_op_test_cpu PASSED in 83.7s //tensorflow/python/kernel_tests/array_ops:denormal_test_cpu PASSED in 79.0s //tensorflow/python/kernel_tests/array_ops:depthtospace_op_test_cpu PASSED in 43.0s //tensorflow/python/kernel_tests/array_ops:edit_distance_op_test PASSED in 59.2s //tensorflow/python/kernel_tests/array_ops:fingerprint_op_test PASSED in 65.0s //tensorflow/python/kernel_tests/array_ops:gather_nd_op_test_cpu PASSED in 41.2s //tensorflow/python/kernel_tests/array_ops:identity_n_op_py_test PASSED in 71.7s //tensorflow/python/kernel_tests/array_ops:identity_op_py_test PASSED in 38.3s //tensorflow/python/kernel_tests/array_ops:large_concat_op_test_cpu PASSED in 68.4s //tensorflow/python/kernel_tests/array_ops:manip_ops_test_cpu PASSED in 62.8s //tensorflow/python/kernel_tests/array_ops:one_hot_op_test_cpu PASSED in 51.8s //tensorflow/python/kernel_tests/array_ops:pad_op_test_cpu PASSED in 59.9s //tensorflow/python/kernel_tests/array_ops:reshape_op_test_cpu PASSED in 39.0s //tensorflow/python/kernel_tests/array_ops:reverse_sequence_op_test_cpu PASSED in 47.2s 
//tensorflow/python/kernel_tests/array_ops:scalar_test_cpu PASSED in 54.7s //tensorflow/python/kernel_tests/array_ops:shape_ops_test_cpu PASSED in 46.1s //tensorflow/python/kernel_tests/array_ops:slice_op_test_cpu PASSED in 59.4s //tensorflow/python/kernel_tests/array_ops:spacetobatch_op_test_cpu PASSED in 50.6s //tensorflow/python/kernel_tests/array_ops:spacetodepth_op_test_cpu PASSED in 46.2s //tensorflow/python/kernel_tests/array_ops:stack_op_test_cpu PASSED in 86.2s //tensorflow/python/kernel_tests/array_ops:unique_op_test_cpu PASSED in 39.9s //tensorflow/python/kernel_tests/array_ops:unstack_op_test_cpu PASSED in 41.1s //tensorflow/python/kernel_tests/array_ops:where_op_test_cpu PASSED in 57.4s //tensorflow/python/kernel_tests/control_flow:cond_v2_test_cpu PASSED in 171.4s //tensorflow/python/kernel_tests/control_flow:control_flow_util_test PASSED in 76.1s //tensorflow/python/kernel_tests/control_flow:control_flow_util_v2_test PASSED in 46.5s //tensorflow/python/kernel_tests/control_flow:py_func_test_cpu PASSED in 65.7s //tensorflow/python/kernel_tests/control_flow:scan_ops_test_cpu PASSED in 218.6s //tensorflow/python/kernel_tests/control_flow:while_v2_test_cpu PASSED in 181.8s //tensorflow/python/kernel_tests/custom_ops:ackermann_test PASSED in 43.9s //tensorflow/python/kernel_tests/custom_ops:duplicate_op_test PASSED in 78.6s //tensorflow/python/kernel_tests/custom_ops:invalid_op_test PASSED in 44.0s //tensorflow/python/kernel_tests/data_structures:conditional_accumulator_test PASSED in 52.4s //tensorflow/python/kernel_tests/data_structures:dynamic_partition_op_test_2gpu PASSED in 60.3s //tensorflow/python/kernel_tests/data_structures:dynamic_partition_op_test_cpu PASSED in 69.4s //tensorflow/python/kernel_tests/data_structures:dynamic_stitch_op_test_cpu PASSED in 65.7s //tensorflow/python/kernel_tests/data_structures:fifo_queue_test PASSED in 64.7s //tensorflow/python/kernel_tests/data_structures:list_ops_test_cpu PASSED in 72.0s 
//tensorflow/python/kernel_tests/data_structures:listdiff_op_test PASSED in 83.7s
//tensorflow/python/kernel_tests/data_structures:lookup_ops_test PASSED in 128.1s
//tensorflow/python/kernel_tests/data_structures:map_ops_test PASSED in 49.7s
//tensorflow/python/kernel_tests/data_structures:padding_fifo_queue_test_cpu PASSED in 59.4s
//tensorflow/python/kernel_tests/data_structures:priority_queue_test PASSED in 48.4s
//tensorflow/python/kernel_tests/data_structures:stack_ops_test_cpu PASSED in 71.5s
//tensorflow/python/kernel_tests/data_structures:stage_op_test_cpu PASSED in 56.1s
//tensorflow/python/kernel_tests/distributions:bernoulli_test_cpu PASSED in 76.0s
//tensorflow/python/kernel_tests/distributions:bijector_test_cpu PASSED in 46.2s
//tensorflow/python/kernel_tests/distributions:categorical_test_cpu PASSED in 51.1s
//tensorflow/python/kernel_tests/distributions:dirichlet_multinomial_test_cpu PASSED in 56.6s
//tensorflow/python/kernel_tests/distributions:dirichlet_test_cpu PASSED in 65.9s
//tensorflow/python/kernel_tests/distributions:exponential_test_cpu PASSED in 83.7s
//tensorflow/python/kernel_tests/distributions:gamma_test_cpu PASSED in 124.4s
//tensorflow/python/kernel_tests/distributions:identity_bijector_test_cpu PASSED in 53.1s
//tensorflow/python/kernel_tests/distributions:kullback_leibler_test_cpu PASSED in 50.7s
//tensorflow/python/kernel_tests/distributions:laplace_test_cpu PASSED in 108.3s
//tensorflow/python/kernel_tests/distributions:multinomial_test_cpu PASSED in 39.0s
//tensorflow/python/kernel_tests/distributions:normal_test_cpu PASSED in 104.6s
//tensorflow/python/kernel_tests/distributions:special_math_test_cpu PASSED in 98.0s
//tensorflow/python/kernel_tests/distributions:uniform_test_cpu PASSED in 57.4s
//tensorflow/python/kernel_tests/image_ops:attention_ops_test PASSED in 38.1s
//tensorflow/python/kernel_tests/image_ops:decode_bmp_op_test PASSED in 36.8s
//tensorflow/python/kernel_tests/image_ops:decode_compressed_op_test PASSED in 43.4s
//tensorflow/python/kernel_tests/image_ops:decode_image_op_test PASSED in 40.0s
//tensorflow/python/kernel_tests/image_ops:decode_png_op_test PASSED in 55.1s
//tensorflow/python/kernel_tests/image_ops:decode_raw_op_test PASSED in 35.5s
//tensorflow/python/kernel_tests/image_ops:draw_bounding_box_op_test_cpu PASSED in 47.6s
//tensorflow/python/kernel_tests/image_ops:extract_image_patches_op_test_cpu PASSED in 46.6s
//tensorflow/python/kernel_tests/image_ops:extract_volume_patches_op_test_cpu PASSED in 46.2s
//tensorflow/python/kernel_tests/io_ops:checkpoint_ops_test PASSED in 42.0s
//tensorflow/python/kernel_tests/io_ops:decode_csv_op_test PASSED in 36.0s
//tensorflow/python/kernel_tests/io_ops:io_ops_test PASSED in 38.7s
//tensorflow/python/kernel_tests/io_ops:parse_single_example_op_test PASSED in 49.1s
//tensorflow/python/kernel_tests/io_ops:parsing_ops_test PASSED in 76.9s
//tensorflow/python/kernel_tests/io_ops:reader_ops_test PASSED in 34.6s
//tensorflow/python/kernel_tests/io_ops:record_input_test PASSED in 132.0s
//tensorflow/python/kernel_tests/io_ops:save_restore_ops_test PASSED in 44.1s
//tensorflow/python/kernel_tests/linalg:determinant_op_test_cpu PASSED in 31.6s
//tensorflow/python/kernel_tests/linalg:linear_operator_addition_test_cpu PASSED in 63.6s
//tensorflow/python/kernel_tests/linalg:linear_operator_test_cpu PASSED in 64.6s
//tensorflow/python/kernel_tests/linalg:lu_op_test_cpu PASSED in 44.1s
//tensorflow/python/kernel_tests/linalg:matrix_inverse_op_test_cpu PASSED in 49.4s
//tensorflow/python/kernel_tests/linalg:matrix_logarithm_op_test PASSED in 171.8s
//tensorflow/python/kernel_tests/linalg:matrix_solve_ls_op_test_cpu PASSED in 193.2s
//tensorflow/python/kernel_tests/linalg:matrix_solve_op_test_cpu PASSED in 166.0s
//tensorflow/python/kernel_tests/linalg:matrix_square_root_op_test_cpu PASSED in 38.2s
//tensorflow/python/kernel_tests/linalg:slicing_test_cpu PASSED in 56.3s
//tensorflow/python/kernel_tests/linalg/sparse:conjugate_gradient_test_cpu PASSED in 39.6s
//tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_test_cpu PASSED in 30.7s
//tensorflow/python/kernel_tests/math_ops:aggregate_ops_test_cpu PASSED in 36.5s
//tensorflow/python/kernel_tests/math_ops:argmax_op_test_cpu PASSED in 25.5s
//tensorflow/python/kernel_tests/math_ops:banded_triangular_solve_op_test_cpu PASSED in 79.4s
//tensorflow/python/kernel_tests/math_ops:basic_gpu_test_cpu PASSED in 28.2s
//tensorflow/python/kernel_tests/math_ops:bincount_op_test_cpu PASSED in 29.9s
//tensorflow/python/kernel_tests/math_ops:bucketize_op_test_cpu PASSED in 26.9s
//tensorflow/python/kernel_tests/math_ops:clip_ops_test PASSED in 41.2s
//tensorflow/python/kernel_tests/math_ops:confusion_matrix_test PASSED in 37.2s
//tensorflow/python/kernel_tests/math_ops:cross_grad_test_cpu PASSED in 35.7s
//tensorflow/python/kernel_tests/math_ops:cumulative_logsumexp_test_cpu PASSED in 33.7s
//tensorflow/python/kernel_tests/math_ops:in_topk_op_test_cpu PASSED in 28.7s
//tensorflow/python/kernel_tests/math_ops:segment_reduction_ops_d9m_test_cpu PASSED in 18.5s
//tensorflow/python/kernel_tests/math_ops:sets_test PASSED in 41.4s
//tensorflow/python/kernel_tests/math_ops:topk_op_test_cpu PASSED in 15.7s
//tensorflow/python/kernel_tests/math_ops:zero_division_test_cpu PASSED in 21.2s
//tensorflow/python/kernel_tests/nn_ops:betainc_op_test_cpu PASSED in 18.1s
//tensorflow/python/kernel_tests/nn_ops:bias_op_test_cpu PASSED in 232.5s
//tensorflow/python/kernel_tests/nn_ops:conv1d_test_cpu PASSED in 14.4s
//tensorflow/python/kernel_tests/nn_ops:conv1d_transpose_test_cpu PASSED in 13.7s
//tensorflow/python/kernel_tests/nn_ops:conv2d_transpose_test_cpu PASSED in 14.0s
//tensorflow/python/kernel_tests/nn_ops:conv3d_backprop_filter_v2_grad_test_cpu PASSED in 27.8s
//tensorflow/python/kernel_tests/nn_ops:conv3d_transpose_test_cpu PASSED in 18.2s
//tensorflow/python/kernel_tests/nn_ops:ctc_decoder_ops_test PASSED in 13.2s
//tensorflow/python/kernel_tests/nn_ops:ctc_loss_op_test_cpu PASSED in 147.5s
//tensorflow/python/kernel_tests/nn_ops:cudnn_d9m_test_cpu PASSED in 10.4s
//tensorflow/python/kernel_tests/nn_ops:cudnn_deterministic_ops_test_cpu PASSED in 12.2s
//tensorflow/python/kernel_tests/nn_ops:losses_test PASSED in 92.2s
//tensorflow/python/kernel_tests/nn_ops:lrn_op_test_cpu PASSED in 42.4s
//tensorflow/python/kernel_tests/nn_ops:morphological_ops_test_cpu PASSED in 41.4s
//tensorflow/python/kernel_tests/nn_ops:nth_element_op_test_cpu PASSED in 33.8s
//tensorflow/python/kernel_tests/nn_ops:pool_test_cpu PASSED in 93.0s
//tensorflow/python/kernel_tests/nn_ops:pooling_ops_3d_test_cpu PASSED in 97.0s
//tensorflow/python/kernel_tests/nn_ops:relu_op_test_cpu PASSED in 119.9s
//tensorflow/python/kernel_tests/nn_ops:softmax_op_test_cpu PASSED in 54.3s
//tensorflow/python/kernel_tests/nn_ops:softplus_op_test_cpu PASSED in 81.4s
//tensorflow/python/kernel_tests/nn_ops:softsign_op_test_cpu PASSED in 50.6s
//tensorflow/python/kernel_tests/nn_ops:xent_op_d9m_test_cpu PASSED in 241.4s
//tensorflow/python/kernel_tests/nn_ops:xent_op_test_cpu PASSED in 95.9s
//tensorflow/python/kernel_tests/proto:decode_proto_op_test PASSED in 58.7s
//tensorflow/python/kernel_tests/proto:descriptor_source_test PASSED in 80.8s
//tensorflow/python/kernel_tests/proto:encode_proto_op_test PASSED in 115.0s
//tensorflow/python/kernel_tests/quantization_ops:quantization_ops_test PASSED in 61.0s
//tensorflow/python/kernel_tests/random:candidate_sampler_ops_test PASSED in 69.0s
//tensorflow/python/kernel_tests/random:multinomial_op_test_cpu PASSED in 64.4s
//tensorflow/python/kernel_tests/random:parameterized_truncated_normal_op_test_cpu PASSED in 140.8s
//tensorflow/python/kernel_tests/random:random_crop_test_cpu PASSED in 108.8s
//tensorflow/python/kernel_tests/random:random_grad_test_cpu PASSED in 80.5s
//tensorflow/python/kernel_tests/random:random_ops_test_cpu PASSED in 64.1s
//tensorflow/python/kernel_tests/random:random_poisson_test_cpu PASSED in 82.9s
//tensorflow/python/kernel_tests/random:random_shuffle_queue_test PASSED in 50.8s
//tensorflow/python/kernel_tests/random:stateful_random_ops_test_cpu PASSED in 106.8s
//tensorflow/python/kernel_tests/signal:mel_ops_test_cpu PASSED in 75.5s
//tensorflow/python/kernel_tests/signal:mfcc_ops_test_cpu PASSED in 95.9s
//tensorflow/python/kernel_tests/signal:reconstruction_ops_test_cpu PASSED in 107.7s
//tensorflow/python/kernel_tests/signal:shape_ops_test_cpu PASSED in 65.8s
//tensorflow/python/kernel_tests/sparse_ops:sparse_add_op_test PASSED in 49.1s
//tensorflow/python/kernel_tests/sparse_ops:sparse_concat_op_test PASSED in 87.7s
//tensorflow/python/kernel_tests/sparse_ops:sparse_conditional_accumulator_test PASSED in 61.8s
//tensorflow/python/kernel_tests/sparse_ops:sparse_cross_op_test PASSED in 69.4s
//tensorflow/python/kernel_tests/sparse_ops:sparse_matmul_op_test_cpu PASSED in 110.7s
//tensorflow/python/kernel_tests/sparse_ops:sparse_reorder_op_test PASSED in 91.1s
//tensorflow/python/kernel_tests/sparse_ops:sparse_reshape_op_test PASSED in 50.1s
//tensorflow/python/kernel_tests/sparse_ops:sparse_serialization_ops_test PASSED in 96.0s
//tensorflow/python/kernel_tests/sparse_ops:sparse_slice_op_test PASSED in 107.2s
//tensorflow/python/kernel_tests/sparse_ops:sparse_split_op_test_cpu PASSED in 78.2s
//tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_grad_test_cpu PASSED in 102.0s
//tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_op_d9m_test_cpu PASSED in 121.4s
//tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_op_test_cpu PASSED in 146.3s
//tensorflow/python/kernel_tests/sparse_ops:sparse_tensors_map_ops_test PASSED in 112.6s
//tensorflow/python/kernel_tests/sparse_ops:sparse_to_dense_op_py_test_cpu PASSED in 54.5s
//tensorflow/python/kernel_tests/sparse_ops:sparse_xent_op_d9m_test_cpu PASSED in 170.2s
//tensorflow/python/kernel_tests/sparse_ops:sparse_xent_op_test_cpu PASSED in 108.8s
//tensorflow/python/kernel_tests/sparse_ops:sparsemask_op_test PASSED in 71.2s
//tensorflow/python/kernel_tests/strings_ops:as_string_op_test PASSED in 57.9s
//tensorflow/python/kernel_tests/strings_ops:base64_ops_test PASSED in 56.0s
//tensorflow/python/kernel_tests/strings_ops:reduce_join_op_test_cpu PASSED in 104.2s
//tensorflow/python/kernel_tests/strings_ops:regex_full_match_op_test PASSED in 50.6s
//tensorflow/python/kernel_tests/strings_ops:regex_replace_op_test PASSED in 54.5s
//tensorflow/python/kernel_tests/strings_ops:string_bytes_split_op_test PASSED in 45.5s
//tensorflow/python/kernel_tests/strings_ops:string_format_op_test PASSED in 91.6s
//tensorflow/python/kernel_tests/strings_ops:string_join_op_test PASSED in 50.7s
//tensorflow/python/kernel_tests/strings_ops:string_length_op_test PASSED in 56.8s
//tensorflow/python/kernel_tests/strings_ops:string_lower_op_test PASSED in 58.5s
//tensorflow/python/kernel_tests/strings_ops:string_split_op_test PASSED in 51.9s
//tensorflow/python/kernel_tests/strings_ops:string_strip_op_test PASSED in 68.0s
//tensorflow/python/kernel_tests/strings_ops:string_to_hash_bucket_op_test_cpu PASSED in 104.6s
//tensorflow/python/kernel_tests/strings_ops:string_to_number_op_test_cpu PASSED in 80.6s
//tensorflow/python/kernel_tests/strings_ops:string_upper_op_test PASSED in 55.6s
//tensorflow/python/kernel_tests/strings_ops:substr_op_test PASSED in 53.9s
//tensorflow/python/kernel_tests/strings_ops:unicode_decode_op_test PASSED in 112.6s
//tensorflow/python/kernel_tests/strings_ops:unicode_encode_op_test PASSED in 76.7s
//tensorflow/python/kernel_tests/strings_ops:unicode_script_op_test PASSED in 61.4s
//tensorflow/python/kernel_tests/strings_ops:unicode_transcode_op_test PASSED in 102.2s
//tensorflow/python/kernel_tests/strings_ops:unsorted_segment_join_op_test_cpu PASSED in 72.1s
//tensorflow/python/kernel_tests/summary_ops:summary_ops_test_cpu PASSED in 153.5s
//tensorflow/python/kernel_tests/summary_ops:summary_v1_audio_op_test_cpu PASSED in 114.3s
//tensorflow/python/kernel_tests/summary_ops:summary_v1_image_op_test_cpu PASSED in 106.0s
//tensorflow/python/kernel_tests/summary_ops:summary_v1_ops_test PASSED in 78.3s
//tensorflow/python/kernel_tests/summary_ops:summary_v1_tensor_op_test PASSED in 87.7s
//tensorflow/python/kernel_tests/v1_compat_tests:array_ops_test_cpu PASSED in 49.4s
//tensorflow/python/kernel_tests/v1_compat_tests:dense_update_ops_test_cpu PASSED in 55.5s
//tensorflow/python/kernel_tests/v1_compat_tests:identity_op_py_test PASSED in 60.1s
//tensorflow/python/kernel_tests/v1_compat_tests:scatter_nd_ops_test_cpu PASSED in 41.7s
//tensorflow/python/kernel_tests/v1_compat_tests:session_ops_test_cpu PASSED in 62.0s
//tensorflow/python/kernel_tests/v1_compat_tests:stack_op_test_cpu PASSED in 112.4s
//tensorflow/python/kernel_tests/variables:dense_update_ops_no_tsan_test_cpu PASSED in 68.9s
//tensorflow/python/kernel_tests/variables:dense_update_ops_test_cpu PASSED in 61.5s
//tensorflow/python/kernel_tests/variables:partitioned_variables_test PASSED in 91.6s
//tensorflow/python/kernel_tests/variables:resource_variable_ops_test_cpu PASSED in 155.5s
//tensorflow/python/kernel_tests/variables:variable_ops_test_cpu PASSED in 50.8s
//tensorflow/python/kernel_tests/variables:variable_scope_test PASSED in 98.4s
//tensorflow/python/kernel_tests/variables:variables_test PASSED in 92.9s
//tensorflow/python/lib/io:file_io_test PASSED in 68.0s
//tensorflow/python/lib/io:tf_record_test PASSED in 95.4s
//tensorflow/python/module:module_test PASSED in 71.2s
//tensorflow/python/ops:array_grad_test_cpu PASSED in 50.1s
//tensorflow/python/ops:array_ops_shape_test PASSED in 79.3s
//tensorflow/python/ops:array_ops_test PASSED in 55.0s
//tensorflow/python/ops:autograph_ops_test PASSED in 64.9s
//tensorflow/python/ops:bincount_ops_test_cpu PASSED in 69.9s
//tensorflow/python/ops:bitwise_ops_test_cpu PASSED in 56.9s
//tensorflow/python/ops:clip_ops_test PASSED in 62.2s
//tensorflow/python/ops:clustering_ops_test PASSED in 49.1s
//tensorflow/python/ops:collective_ops_gpu_test_cpu PASSED in 78.9s
//tensorflow/python/ops:collective_ops_test PASSED in 111.8s
//tensorflow/python/ops:collective_ops_xla_test PASSED in 67.7s
//tensorflow/python/ops:compiled_collective_ops_gpu_test_2gpu PASSED in 64.6s
//tensorflow/python/ops:compiled_collective_ops_gpu_test_cpu PASSED in 68.2s
//tensorflow/python/ops:control_flow_v2_enable_test PASSED in 38.5s
//tensorflow/python/ops:control_flow_v2_toggles_test PASSED in 39.0s
//tensorflow/python/ops:dequantize_op_test PASSED in 39.5s
//tensorflow/python/ops:embedding_ops_test_cpu PASSED in 57.4s
//tensorflow/python/ops:factory_ops_test_cpu PASSED in 54.1s
//tensorflow/python/ops:functional_ops_test PASSED in 82.9s
//tensorflow/python/ops:gradient_checker_v2_test_cpu PASSED in 84.7s
//tensorflow/python/ops:gradients_test_cpu PASSED in 81.1s
//tensorflow/python/ops:init_ops_test_cpu PASSED in 45.0s
//tensorflow/python/ops:init_ops_v2_test_cpu PASSED in 53.9s
//tensorflow/python/ops:lookup_ops_async_checkpoint_test PASSED in 88.8s
//tensorflow/python/ops:math_grad_test_cpu PASSED in 66.0s
//tensorflow/python/ops:math_ops_linspace_test_cpu PASSED in 53.0s
//tensorflow/python/ops:math_ops_test_cpu PASSED in 76.8s
//tensorflow/python/ops:nn_grad_test_cpu PASSED in 90.5s
//tensorflow/python/ops:nn_loss_scaling_utilities_test PASSED in 53.3s
//tensorflow/python/ops:nn_test_cpu PASSED in 119.6s
//tensorflow/python/ops:nn_xent_test_cpu PASSED in 41.1s
//tensorflow/python/ops:op_selector_test PASSED in 38.7s
//tensorflow/python/ops:quantized_conv_ops_test PASSED in 48.1s
//tensorflow/python/ops:quantized_ops_test PASSED in 49.9s
//tensorflow/python/ops:raw_ops_test_cpu PASSED in 48.6s
//tensorflow/python/ops:rnn_grad_test_cpu PASSED in 44.0s
//tensorflow/python/ops:script_ops_test PASSED in 83.1s
//tensorflow/python/ops:sort_ops_test PASSED in 52.3s
//tensorflow/python/ops:sparse_bincount_ops_test_cpu PASSED in 52.5s
//tensorflow/python/ops:sparse_ops_test PASSED in 49.4s
//tensorflow/python/ops:tensor_array_ops_test PASSED in 46.6s
//tensorflow/python/ops:variable_spec_test PASSED in 65.0s
//tensorflow/python/ops:weak_tensor_array_ops_test PASSED in 35.1s
//tensorflow/python/ops:weak_tensor_constant_op_test PASSED in 77.2s
//tensorflow/python/ops:weak_tensor_image_ops_test PASSED in 31.7s
//tensorflow/python/ops:weak_tensor_math_ops_test PASSED in 52.8s
//tensorflow/python/ops:weak_tensor_nn_test_cpu PASSED in 79.5s
//tensorflow/python/ops:weak_tensor_np_array_ops_test PASSED in 72.9s
//tensorflow/python/ops:weak_tensor_np_math_ops_test PASSED in 43.0s
//tensorflow/python/ops:weak_tensor_ops_test PASSED in 117.8s
//tensorflow/python/ops/losses:util_test PASSED in 41.2s
//tensorflow/python/ops/memory_tests:custom_gradient_memory_test_cpu PASSED in 40.2s
//tensorflow/python/ops/numpy_ops:np_array_ops_test_cpu PASSED in 118.9s
//tensorflow/python/ops/numpy_ops:np_arrays_test_cpu PASSED in 44.8s
//tensorflow/python/ops/numpy_ops:np_dtypes_test_cpu PASSED in 37.0s
//tensorflow/python/ops/numpy_ops:np_interop_test_cpu PASSED in 146.2s
//tensorflow/python/ops/numpy_ops:np_logic_test_cpu PASSED in 38.7s
//tensorflow/python/ops/numpy_ops:np_math_ops_test_cpu PASSED in 57.8s
//tensorflow/python/ops/numpy_ops:np_random_test_cpu PASSED in 112.2s
//tensorflow/python/ops/numpy_ops:np_utils_test_cpu PASSED in 24.2s
//tensorflow/python/ops/numpy_ops/integration_test:np_config_test_cpu PASSED in 110.5s
//tensorflow/python/ops/numpy_ops/integration_test:public_symbol_test PASSED in 86.9s
//tensorflow/python/ops/parallel_for:array_test_cpu PASSED in 64.8s
//tensorflow/python/ops/parallel_for:gradients_test_cpu PASSED in 44.6s
//tensorflow/python/ops/parallel_for:pfor_test PASSED in 33.3s
//tensorflow/python/ops/parallel_for:xla_control_flow_ops_test_cpu PASSED in 82.4s
//tensorflow/python/ops/ragged:convert_to_tensor_or_ragged_tensor_op_test PASSED in 49.1s
//tensorflow/python/ops/ragged:ragged_batch_gather_op_test PASSED in 75.7s
//tensorflow/python/ops/ragged:ragged_bincount_ops_test_cpu PASSED in 39.5s
//tensorflow/python/ops/ragged:ragged_bitcast_op_test PASSED in 26.0s
//tensorflow/python/ops/ragged:ragged_boolean_mask_op_test PASSED in 40.5s
//tensorflow/python/ops/ragged:ragged_concat_op_test PASSED in 27.9s
//tensorflow/python/ops/ragged:ragged_const_op_test PASSED in 32.9s
//tensorflow/python/ops/ragged:ragged_constant_value_op_test PASSED in 23.6s
//tensorflow/python/ops/ragged:ragged_cross_op_test PASSED in 48.5s
//tensorflow/python/ops/ragged:ragged_dispatch_test PASSED in 171.3s
//tensorflow/python/ops/ragged:ragged_dynamic_partition_op_test_cpu PASSED in 33.2s
//tensorflow/python/ops/ragged:ragged_eager_test PASSED in 52.2s
//tensorflow/python/ops/ragged:ragged_expand_dims_op_test PASSED in 24.7s
//tensorflow/python/ops/ragged:ragged_factory_ops_test_cpu PASSED in 68.7s
//tensorflow/python/ops/ragged:ragged_fill_empty_rows_op_test PASSED in 34.7s
//tensorflow/python/ops/ragged:ragged_from_sparse_op_test PASSED in 26.6s
//tensorflow/python/ops/ragged:ragged_from_tensor_op_test PASSED in 38.3s
//tensorflow/python/ops/ragged:ragged_gather_nd_op_test PASSED in 32.5s
//tensorflow/python/ops/ragged:ragged_map_flat_values_op_test PASSED in 29.8s
//tensorflow/python/ops/ragged:ragged_map_fn_op_test PASSED in 65.2s
//tensorflow/python/ops/ragged:ragged_math_ops_test PASSED in 28.6s
//tensorflow/python/ops/ragged:ragged_matmul_op_test PASSED in 56.3s
//tensorflow/python/ops/ragged:ragged_merge_dims_op_test PASSED in 58.5s
//tensorflow/python/ops/ragged:ragged_one_hot_op_test PASSED in 36.3s
//tensorflow/python/ops/ragged:ragged_operators_test PASSED in 47.2s
//tensorflow/python/ops/ragged:ragged_placeholder_op_test PASSED in 28.9s
//tensorflow/python/ops/ragged:ragged_print_op_test PASSED in 55.7s
//tensorflow/python/ops/ragged:ragged_range_op_test PASSED in 45.5s
//tensorflow/python/ops/ragged:ragged_rank_op_test PASSED in 61.6s
//tensorflow/python/ops/ragged:ragged_reduce_op_test PASSED in 70.1s
//tensorflow/python/ops/ragged:ragged_resize_image_op_test PASSED in 44.0s
//tensorflow/python/ops/ragged:ragged_reverse_op_test PASSED in 34.1s
//tensorflow/python/ops/ragged:ragged_row_lengths_op_test PASSED in 26.7s
//tensorflow/python/ops/ragged:ragged_row_splits_to_segment_ids_op_test PASSED in 47.3s
//tensorflow/python/ops/ragged:ragged_segment_ids_to_row_splits_op_test PASSED in 22.6s
//tensorflow/python/ops/ragged:ragged_segment_op_test PASSED in 69.0s
//tensorflow/python/ops/ragged:ragged_size_op_test PASSED in 27.8s
//tensorflow/python/ops/ragged:ragged_split_op_test PASSED in 70.3s
//tensorflow/python/ops/ragged:ragged_squeeze_op_test PASSED in 68.5s
//tensorflow/python/ops/ragged:ragged_stack_op_test PASSED in 30.4s
//tensorflow/python/ops/ragged:ragged_tensor_bounding_shape_op_test PASSED in 29.9s
//tensorflow/python/ops/ragged:ragged_tensor_shape_test PASSED in 91.8s
//tensorflow/python/ops/ragged:ragged_tile_op_test PASSED in 64.7s
//tensorflow/python/ops/ragged:ragged_to_sparse_op_test PASSED in 32.2s
//tensorflow/python/ops/ragged:ragged_to_tensor_op_test PASSED in 88.7s
//tensorflow/python/ops/ragged:ragged_util_test PASSED in 35.9s
//tensorflow/python/ops/ragged:ragged_where_op_test PASSED in 55.6s
//tensorflow/python/ops/ragged:row_partition_test PASSED in 57.4s
//tensorflow/python/ops/ragged:string_ngrams_op_test PASSED in 32.4s
//tensorflow/python/ops/ragged:strings_reduce_join_op_test PASSED in 45.0s
//tensorflow/python/ops/structured:structured_array_ops_test PASSED in 83.8s
//tensorflow/python/ops/structured:structured_tensor_slice_test PASSED in 94.3s
//tensorflow/python/ops/structured:structured_tensor_spec_test PASSED in 40.4s
//tensorflow/python/ops/structured:structured_tensor_test PASSED in 75.7s
//tensorflow/python/ops/v1_compat_tests:gradient_checker_test_cpu PASSED in 36.2s
//tensorflow/python/platform:benchmark_test PASSED in 22.1s
//tensorflow/python/platform:build_info_test PASSED in 35.6s
//tensorflow/python/platform:resource_loader_test PASSED in 15.1s
//tensorflow/python/profiler:pprof_profiler_test PASSED in 32.3s
//tensorflow/python/profiler:profile_context_test_cpu PASSED in 56.6s
//tensorflow/python/profiler:profiler_client_test_cpu PASSED in 32.5s
//tensorflow/python/profiler:profiler_test_cpu PASSED in 36.4s
//tensorflow/python/profiler:profiler_v2_test_cpu PASSED in 30.3s
//tensorflow/python/profiler:profiler_wrapper_test PASSED in 33.4s
//tensorflow/python/profiler:tfprof_logger_test PASSED in 33.4s
//tensorflow/python/profiler/internal:flops_registry_test PASSED in 25.3s
//tensorflow/python/profiler/internal:print_model_analysis_test PASSED in 24.7s
//tensorflow/python/profiler/internal:run_metadata_test_cpu PASSED in 29.7s
//tensorflow/python/saved_model:fingerprinting_test PASSED in 23.1s
//tensorflow/python/saved_model:load_v1_in_v2_test PASSED in 30.4s
//tensorflow/python/saved_model:loader_test PASSED in 19.1s
//tensorflow/python/saved_model:method_name_updater_test PASSED in 18.0s
//tensorflow/python/saved_model:metrics_test PASSED in 20.9s
//tensorflow/python/saved_model:nested_structure_coder_test PASSED in 14.2s
//tensorflow/python/saved_model:pywrap_saved_model_fingerprinting_test PASSED in 15.6s
//tensorflow/python/saved_model:pywrap_saved_model_metrics_test PASSED in 12.8s
//tensorflow/python/saved_model:revived_types_test PASSED in 12.8s
//tensorflow/python/saved_model:save_context_test PASSED in 11.1s
//tensorflow/python/saved_model:save_test PASSED in 47.2s
//tensorflow/python/saved_model:saved_model_test PASSED in 28.4s
//tensorflow/python/saved_model:signature_def_utils_test PASSED in 13.0s
//tensorflow/python/saved_model:simple_save_test PASSED in 13.6s
//tensorflow/python/saved_model:tracing_utils_test PASSED in 14.0s
//tensorflow/python/saved_model:utils_test PASSED in 11.3s
//tensorflow/python/saved_model/model_utils:export_output_test PASSED in 15.1s
//tensorflow/python/saved_model/model_utils:export_test PASSED in 15.3s
//tensorflow/python/saved_model/model_utils:mode_keys_test PASSED in 13.0s
//tensorflow/python/saved_model/registration:registration_saving_test PASSED in 23.1s
//tensorflow/python/saved_model/registration:registration_test PASSED in 11.3s
//tensorflow/python/saved_model/registration:tf_registration_test PASSED in 26.3s
//tensorflow/python/saved_model/tests:variable_wrapper_test PASSED in 13.8s
//tensorflow/python/summary:plugin_asset_test PASSED in 11.5s
//tensorflow/python/summary:summary_iterator_test PASSED in 11.9s
//tensorflow/python/summary:summary_test PASSED in 14.6s
//tensorflow/python/summary:summary_v2_test PASSED in 13.1s
//tensorflow/python/summary/writer:writer_test PASSED in 21.9s
//tensorflow/python/tools:aot_compiled_test PASSED in 21.5s
//tensorflow/python/tools:freeze_graph_test PASSED in 13.5s
//tensorflow/python/tools:optimize_for_inference_test PASSED in 11.6s
//tensorflow/python/tools:print_selective_registration_header_test PASSED in 10.3s
//tensorflow/python/tools:saved_model_cli_test PASSED in 21.4s
//tensorflow/python/tools:saved_model_utils_test PASSED in 12.0s
//tensorflow/python/tools:strip_unused_test PASSED in 11.3s
//tensorflow/python/tools/api/generator:create_python_api_test PASSED in 11.7s
//tensorflow/python/tools/api/generator:output_init_files_test PASSED in 20.9s
//tensorflow/python/tools/api/generator:tensorflow_doc_srcs_test PASSED in 10.2s
//tensorflow/python/tools/api/generator2/extractor:extractor_test PASSED in 0.7s
//tensorflow/python/tools/api/generator2/generator:generator_test PASSED in 1.0s
//tensorflow/python/tools/api/generator2/shared:exported_api_test PASSED in 10.4s
//tensorflow/python/tpu:bfloat16_test PASSED in 11.6s
//tensorflow/python/tpu:feature_column_test PASSED in 18.8s
//tensorflow/python/tpu:topology_test PASSED in 10.2s
//tensorflow/python/tpu:tpu_embedding_for_serving_test PASSED in 14.8s
//tensorflow/python/tpu:tpu_embedding_v2_utils_test PASSED in 13.2s
//tensorflow/python/tpu:tpu_embedding_v3_checkpoint_adapter_test PASSED in 11.9s
//tensorflow/python/tpu:tpu_embedding_v3_utils_test PASSED in 11.0s
//tensorflow/python/tpu:tpu_infeed_test PASSED in 11.5s
//tensorflow/python/tpu:tpu_sharding_test PASSED in 10.5s
//tensorflow/python/tpu:tpu_test_wrapper_test PASSED in 10.9s
//tensorflow/python/tpu/client:client_py_test PASSED in 13.8s
//tensorflow/python/trackable:autotrackable_test PASSED in 14.1s
//tensorflow/python/trackable:base_delegate_test PASSED in 16.7s
//tensorflow/python/trackable:base_test PASSED in 13.5s
//tensorflow/python/trackable:python_state_test PASSED in 12.5s
//tensorflow/python/trackable:resource_test PASSED in 11.5s
//tensorflow/python/trackable:trackable_utils_test PASSED in 12.1s
//tensorflow/python/training:adadelta_test_cpu PASSED in 25.4s
//tensorflow/python/training:adagrad_da_test_cpu PASSED in 15.1s
//tensorflow/python/training:adagrad_test_cpu PASSED in 20.2s
//tensorflow/python/training:adam_test_cpu PASSED in 23.1s
//tensorflow/python/training:basic_loops_test_cpu PASSED in 13.3s
//tensorflow/python/training:basic_session_run_hooks_test PASSED in 32.0s
//tensorflow/python/training:checkpoint_ops_test PASSED in 13.9s
//tensorflow/python/training:coordinator_test_cpu PASSED in 20.5s
//tensorflow/python/training:device_setter_test_cpu PASSED in 12.7s
//tensorflow/python/training:ftrl_test_cpu PASSED in 27.3s
//tensorflow/python/training:gradient_descent_test_cpu PASSED in 18.7s
//tensorflow/python/training:input_test PASSED in 35.9s
//tensorflow/python/training:momentum_test_cpu PASSED in 21.7s
//tensorflow/python/training:monitored_session_test PASSED in 39.9s
//tensorflow/python/training:moving_averages_test_cpu PASSED in 23.0s
//tensorflow/python/training:optimizer_test_cpu PASSED in 25.7s
//tensorflow/python/training:proximal_adagrad_test_cpu PASSED in 17.3s
//tensorflow/python/training:proximal_gradient_descent_test_cpu PASSED in 18.4s
//tensorflow/python/training:quantize_training_test_cpu PASSED in 11.0s
//tensorflow/python/training:queue_runner_test_cpu PASSED in 19.8s
//tensorflow/python/training:rmsprop_test_cpu PASSED in 35.2s
//tensorflow/python/training:saver_large_partitioned_variable_test PASSED in 17.5s
//tensorflow/python/training:saver_test_2gpu PASSED in 44.8s
//tensorflow/python/training:saver_test_cpu PASSED in 49.5s
//tensorflow/python/training:server_lib_multiple_containers_test PASSED in 11.6s
//tensorflow/python/training:server_lib_same_variables_clear_container_test PASSED in 14.4s
//tensorflow/python/training:server_lib_same_variables_clear_test PASSED in 14.4s
//tensorflow/python/training:server_lib_same_variables_no_clear_test PASSED in 15.6s
//tensorflow/python/training:server_lib_sparse_job_test PASSED in 12.7s
//tensorflow/python/training:server_lib_test PASSED in 25.4s
//tensorflow/python/training:session_manager_test_cpu PASSED in 85.0s
//tensorflow/python/training:slot_creator_test_cpu PASSED in 16.5s
//tensorflow/python/training:supervisor_test PASSED in 25.8s
//tensorflow/python/training:training_ops_mlir_test_cpu PASSED in 18.2s
//tensorflow/python/training:training_ops_test_cpu PASSED in 12.7s
//tensorflow/python/training:training_util_test PASSED in 16.9s
//tensorflow/python/training:warm_starting_util_test PASSED in 32.8s
//tensorflow/python/training/experimental:loss_scale_optimizer_test PASSED in 25.7s
//tensorflow/python/training/experimental:loss_scale_test PASSED in 37.6s
//tensorflow/python/training/experimental:mixed_precision_test_cpu PASSED in 16.4s
//tensorflow/python/training/saving:saveable_object_util_test PASSED in 12.6s
//tensorflow/python/util:compat_test PASSED in 12.1s
//tensorflow/python/util:decorator_utils_test PASSED in 11.1s
//tensorflow/python/util:deprecation_test PASSED in 13.6s
//tensorflow/python/util:dispatch_test PASSED in 15.1s
//tensorflow/python/util:example_parser_configuration_test PASSED in 13.7s
//tensorflow/python/util:fast_module_type_test PASSED in 12.7s
//tensorflow/python/util:function_parameter_canonicalizer_test PASSED in 12.5s
//tensorflow/python/util:function_utils_test PASSED in 11.8s
//tensorflow/python/util:keyword_args_test PASSED in 12.9s
//tensorflow/python/util:lazy_loader_test PASSED in 13.2s
//tensorflow/python/util:lock_util_test PASSED in 13.2s
//tensorflow/python/util:module_wrapper_test PASSED in 14.7s
//tensorflow/python/util:nest_test PASSED in 30.4s
//tensorflow/python/util:object_identity_test PASSED in 12.4s
//tensorflow/python/util:pywrap_xla_ops_test PASSED in 4.0s
//tensorflow/python/util:serialization_test PASSED in 10.9s
//tensorflow/python/util:tf_contextlib_test PASSED in 10.8s
//tensorflow/python/util:tf_decorator_test PASSED in 10.3s
//tensorflow/python/util:tf_export_test PASSED in 12.4s
//tensorflow/python/util:tf_inspect_test PASSED in 11.3s
//tensorflow/python/util:tf_should_use_test PASSED in 11.9s
//tensorflow/python/util:tf_stack_test PASSED in 11.3s
//tensorflow/python/util:traceback_utils_test PASSED in 10.7s
//tensorflow/python/util:type_annotations_test PASSED in 10.9s
//tensorflow/python/util:variable_utils_test PASSED in 11.4s
//tensorflow/python/util:vlog_test PASSED in 11.3s
//tensorflow/python/util/protobuf:protobuf_compare_test PASSED in 5.2s
//tensorflow/tools/api/tests:module_test PASSED in 24.5s
//tensorflow/tools/benchmark:benchmark_model_test PASSED in 2.9s
//tensorflow/tools/common:public_api_test PASSED in 3.4s
//tensorflow/tools/common:traverse_test PASSED in 3.3s
//tensorflow/tools/compatibility:all_renames_v2_test PASSED in 10.8s
//tensorflow/tools/compatibility:ast_edits_test PASSED in 10.5s
//tensorflow/tools/compatibility:test_file_v1_0 PASSED in 23.3s
//tensorflow/tools/compatibility:test_file_v2_0 PASSED in 24.5s
//tensorflow/tools/compatibility:tf_upgrade_test PASSED in 10.6s
//tensorflow/tools/compatibility:tf_upgrade_v2_safety_test PASSED in 9.9s
//tensorflow/tools/docs:tf_doctest_test PASSED in 1.9s
//tensorflow/tools/graph_transforms:file_utils_test PASSED in 0.7s
//tensorflow/tools/graph_transforms:transform_graph_test PASSED in 3.0s
//tensorflow/tools/graph_transforms:transform_utils_test PASSED in 2.9s
//tensorflow/tools/graph_transforms:transforms_test PASSED in 4.7s
//tensorflow/tools/proto_splitter:merge_test PASSED in 0.3s
//tensorflow/tools/proto_splitter:split_graph_def_test PASSED in 10.7s
//tensorflow/tools/proto_splitter:split_test PASSED in 10.8s
//tensorflow/tools/proto_splitter:util_test PASSED in 10.3s
//tensorflow/tools/proto_splitter/cc:composable_splitter_test PASSED in 0.3s
//tensorflow/tools/proto_splitter/cc:graph_def_splitter_test PASSED in 0.3s
//tensorflow/tools/proto_splitter/cc:saved_model_splitter_test PASSED in 0.2s
//tensorflow/tools/proto_splitter/cc:util_test PASSED in 2.7s
//tensorflow/tools/proto_splitter/python:saved_model_test PASSED in 11.1s
//tensorflow/tools/proto_splitter/python:test_util_test PASSED in 10.4s
//tensorflow/tools/proto_text:gen_proto_text_functions_lib_test PASSED in 0.4s
//tensorflow/tools/tensorflow_builder/compat_checker:compat_checker_test PASSED in 0.5s
//tensorflow/compiler/tests:complex_div_test_cpu PASSED in 27.0s
  Stats over 2 runs: max = 27.0s, min = 11.9s, avg = 19.4s, dev = 7.5s
//tensorflow/compiler/tests:complex_div_test_cpu_mlir_bridge_test PASSED in 32.1s
  Stats over 2 runs: max = 32.1s, min = 10.2s, avg = 21.2s, dev = 11.0s
//tensorflow/python/data/experimental/kernel_tests/optimization:optimization_test PASSED in 128.0s
  Stats over 2 runs: max = 128.0s, min = 18.9s, avg = 73.5s, dev = 54.5s
//tensorflow/python/data/experimental/kernel_tests/service:metadata_test PASSED in 105.3s
  Stats over 2 runs: max = 105.3s, min = 20.8s, avg = 63.1s, dev = 42.2s
//tensorflow/python/data/kernel_tests:padded_batch_test PASSED in 161.6s
  Stats over 2 runs: max = 161.6s, min = 72.4s, avg = 117.0s, dev = 44.6s
//tensorflow/python/data/kernel_tests:repeat_test PASSED in 448.7s
  Stats over 2 runs: max = 448.7s, min = 357.3s, avg = 403.0s, dev = 45.7s
//tensorflow/python/data/kernel_tests:window_test PASSED in 128.1s
  Stats over 2 runs: max = 128.1s, min = 94.2s, avg = 111.1s, dev = 17.0s
//tensorflow/python/kernel_tests/array_ops:scatter_nd_ops_test_cpu PASSED in 58.7s
  Stats over 2 runs: max = 58.7s, min = 12.8s, avg = 35.7s, dev = 22.9s
//tensorflow/python/kernel_tests/control_flow:functional_ops_test_cpu PASSED in 98.7s
  Stats over 2 runs: max = 98.7s, min = 22.9s, avg = 60.8s, dev = 37.9s
//tensorflow/python/kernel_tests/control_flow:map_fn_test_cpu PASSED in 71.8s
  Stats over 2 runs: max = 71.8s, min = 20.5s, avg = 46.1s, dev = 25.7s
//tensorflow/python/kernel_tests/nn_ops:atrous_conv2d_test_cpu PASSED in 39.2s
  Stats over 2 runs: max = 39.2s, min = 34.3s, avg = 36.8s, dev = 2.4s
//tensorflow/python/kernel_tests/nn_ops:bias_op_d9m_test_cpu PASSED in 197.6s
  Stats over 2 runs: max = 197.6s, min = 67.5s, avg = 132.6s, dev = 65.1s
//tensorflow/python/kernel_tests/nn_ops:conv2d_backprop_filter_grad_test_cpu PASSED in 11.8s
  Stats over 2 runs: max = 11.8s, min = 4.1s, avg = 8.0s, dev = 3.9s
//tensorflow/python/kernel_tests/signal:fft_ops_test_cpu PASSED in 223.4s
  Stats over 2 runs: max = 223.4s, min = 118.4s, avg = 170.9s, dev = 52.5s
//tensorflow/python/ops:control_flow_ops_test_cpu PASSED in 100.3s
  Stats over 2 runs: max = 100.3s, min = 28.4s, avg = 64.4s, dev = 36.0s
//tensorflow/compiler/tests:spacetobatch_op_test_cpu PASSED in 117.8s
  Stats over 3 runs: max = 117.8s, min = 18.4s, avg = 52.6s, dev = 46.2s
//tensorflow/compiler/tests:spacetobatch_op_test_cpu_mlir_bridge_test PASSED in 68.8s
  Stats over 3 runs: max = 68.8s, min = 16.4s, avg = 37.4s, dev = 22.6s
//tensorflow/core/data/service:thread_safe_buffer_test PASSED in 1.0s
  Stats over 3 runs: max = 1.0s, min = 0.9s, avg = 1.0s, dev = 0.0s
//tensorflow/python/data/experimental/kernel_tests/service:multi_process_cluster_test PASSED in 57.7s
  Stats over 3 runs: max = 57.7s, min = 16.2s, avg = 34.1s, dev = 17.4s
//tensorflow/python/data/kernel_tests:unique_test PASSED in 42.9s
  Stats over 3 runs: max = 42.9s, min = 20.9s, avg = 32.3s, dev = 9.0s
//tensorflow/python/distribute/coordinator:metric_utils_test PASSED in 87.5s
  Stats over 3 runs: max = 87.5s, min = 15.0s, avg = 47.1s, dev = 30.2s
//tensorflow/python/kernel_tests/array_ops:gather_op_test_cpu PASSED in 106.3s
  Stats over 3 runs: max = 106.3s, min = 41.1s, avg = 63.7s, dev = 30.2s
//tensorflow/python/kernel_tests/array_ops:weights_broadcast_test PASSED in 64.1s
  Stats over 3 runs: max = 64.1s, min = 9.5s, avg = 27.9s, dev = 25.6s
//tensorflow/python/kernel_tests/distributions:util_test_cpu PASSED in 66.9s
  Stats over 3 runs: max = 66.9s, min = 17.8s, avg = 38.3s, dev = 20.8s
//tensorflow/python/kernel_tests/linalg:matrix_triangular_solve_op_test_cpu PASSED in 812.9s
  Stats over 3 runs: max = 812.9s, min = 49.2s, avg = 306.5s, dev = 358.1s
//tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_grad_test_cpu PASSED in 33.8s
  Stats over 3 runs: max = 33.8s, min = 5.7s, avg = 16.2s, dev = 12.5s
//tensorflow/python/kernel_tests/random:multinomial_op_big_test_cpu PASSED in 71.1s
  Stats over 3 runs: max = 71.1s, min = 7.7s, avg = 30.5s, dev = 28.8s
//tensorflow/python/eager:small_constants_optimizer_test_cpu FAILED in 3 out of 3 in 361.7s
  Stats over 3 runs: max = 361.7s, min = 299.6s, avg = 340.5s, dev = 28.9s
  /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/eager/small_constants_optimizer_test_cpu/test.log
  /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/eager/small_constants_optimizer_test_cpu/test_attempts/attempt_1.log
  /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/eager/small_constants_optimizer_test_cpu/test_attempts/attempt_2.log
//tensorflow/core/kernels:example_parsing_ops_test PASSED in 1.9s
  Stats over 4 runs: max = 1.9s, min = 0.7s, avg = 1.1s, dev = 0.4s
//tensorflow/dtensor/python/tests:batchparallel_spmd_test_cpu PASSED in 72.7s
  Stats over 4 runs: max = 72.7s, min = 21.0s, avg = 34.9s, dev = 21.8s
//tensorflow/dtensor/python/tests:conv_test_cpu PASSED in 47.8s
  Stats over 4 runs: max = 47.8s, min = 13.7s, avg = 23.0s, dev = 14.3s
//tensorflow/dtensor/python/tests:sparse_test_cpu PASSED in 99.8s
  Stats over 4 runs: max = 99.8s, min = 13.6s, avg = 36.3s, dev = 36.7s
//tensorflow/python/data/experimental/kernel_tests:auto_shard_dataset_test PASSED in 176.4s
  Stats over 4 runs: max = 176.4s, min = 65.2s, avg = 110.2s, dev = 42.6s
//tensorflow/python/data/experimental/kernel_tests:from_list_test PASSED in 203.6s
  Stats over 4 runs: max = 203.6s, min = 79.0s, avg = 150.6s, dev = 46.0s
//tensorflow/python/data/experimental/kernel_tests:map_and_batch_test PASSED in 107.2s
  Stats over 4 runs: max = 107.2s, min = 79.8s, avg = 97.3s, dev = 11.2s
//tensorflow/python/data/experimental/kernel_tests:parse_example_dataset_test PASSED in 82.3s
  Stats over 4 runs: max = 82.3s, min = 31.1s, avg = 58.4s, dev = 19.8s
//tensorflow/python/data/experimental/kernel_tests:rebatch_dataset_test PASSED in 59.0s
  Stats over 4 runs: max = 59.0s, min = 13.5s, avg = 37.5s, dev = 18.1s
//tensorflow/python/data/experimental/kernel_tests:sql_dataset_test PASSED in 145.2s
  Stats over 4 runs: max = 145.2s, min = 55.0s, avg = 88.5s, dev = 34.3s
//tensorflow/python/data/experimental/kernel_tests/service:cross_trainer_cache_ft_test PASSED in 90.4s
  Stats over 4 runs: max = 90.4s, min = 7.4s, avg = 29.7s, dev = 35.1s
//tensorflow/python/data/kernel_tests:fixed_length_record_dataset_test PASSED in 102.7s
  Stats over 4 runs: max = 102.7s, min = 10.6s, avg = 47.7s, dev = 34.1s
//tensorflow/python/data/kernel_tests:from_generator_test PASSED in 128.9s
  Stats over 4 runs: max = 128.9s, min = 26.5s, avg = 55.8s, dev = 42.5s
//tensorflow/python/data/kernel_tests:from_tensor_slices_test PASSED in 224.1s
  Stats over 4 runs: max = 224.1s, min = 95.0s, avg = 165.5s, dev = 52.2s
//tensorflow/python/data/kernel_tests:from_tensors_test PASSED in 222.7s
  Stats over 4 runs: max = 222.7s, min = 101.8s, avg = 169.7s, dev = 43.9s
//tensorflow/python/data/kernel_tests:group_by_window_test PASSED in 109.0s
  Stats over 4 runs: max = 109.0s, min = 24.6s, avg = 52.4s, dev = 33.3s
//tensorflow/python/data/kernel_tests:list_files_test PASSED in 220.4s
  Stats over 4 runs: max = 220.4s, min = 107.2s, avg = 172.5s, dev = 41.5s
//tensorflow/python/data/kernel_tests:ragged_batch_test PASSED in 74.0s
  Stats over 4 runs: max = 74.0s, min = 28.7s, avg = 45.7s, dev = 17.0s
//tensorflow/python/data/kernel_tests:take_test PASSED in 423.7s
  Stats over 4 runs: max = 423.7s, min = 162.9s, avg = 282.2s, dev = 102.1s
//tensorflow/python/data/kernel_tests:take_while_test PASSED in 162.0s
  Stats over 4 runs: max = 162.0s, min = 98.6s, avg = 131.4s, dev = 22.9s
//tensorflow/python/data/kernel_tests:text_line_dataset_test PASSED in 163.7s
  Stats over 4 runs: max = 163.7s, min = 81.5s, avg = 107.5s, dev = 33.3s
//tensorflow/python/data/kernel_tests:zip_test PASSED in 57.5s
  Stats over 4 runs: max = 57.5s, min = 26.0s, avg = 44.4s, dev = 11.7s
//tensorflow/python/debug/lib:dumping_callback_test_cpu PASSED in 74.9s
  Stats over 4 runs: max = 74.9s, min = 29.6s, avg = 45.4s, dev = 17.5s
//tensorflow/python/distribute:cross_device_ops_test_cpu PASSED in 86.7s
  Stats over 4 runs: max = 86.7s, min = 34.7s, avg = 52.4s, dev = 20.5s
//tensorflow/python/framework:convert_to_constants_test PASSED in 132.9s
  Stats over 4 runs: max = 132.9s, min = 41.3s, avg = 68.4s, dev = 37.6s
//tensorflow/python/kernel_tests:collective_ops_test_cpu PASSED in 135.4s
  Stats over 4 runs: max = 135.4s, min = 43.3s, avg = 73.7s, dev = 36.3s
//tensorflow/python/kernel_tests/array_ops:concat_op_test_cpu PASSED in 58.0s
  Stats over 4 runs: max = 58.0s, min = 12.8s, avg = 28.6s, dev = 17.4s
//tensorflow/python/kernel_tests/array_ops:init_ops_test_cpu PASSED in 111.3s
  Stats over 4 runs: max = 111.3s, min = 20.2s, avg = 68.9s, dev = 36.6s
//tensorflow/python/kernel_tests/array_ops:split_op_test_cpu PASSED in 122.4s
  Stats over 4 runs: max = 122.4s, min = 10.4s, avg = 47.3s, dev = 45.5s
//tensorflow/python/kernel_tests/linalg:einsum_op_test_cpu PASSED in 192.0s
  Stats over 4 runs: max = 192.0s, min = 22.3s, avg = 100.9s, dev = 63.7s
//tensorflow/python/kernel_tests/linalg:linear_operator_lower_triangular_test_cpu PASSED in 201.3s
  Stats over 4 runs: max = 201.3s, min = 64.4s, avg = 120.2s, dev = 53.9s
//tensorflow/python/kernel_tests/nn_ops:conv_ops_test_cpu PASSED in 44.9s
  Stats over 4 runs: max = 44.9s, min = 31.9s, avg = 37.9s, dev = 5.2s
//tensorflow/python/kernel_tests/random:random_gamma_test_cpu PASSED in 181.8s
  Stats over 4 runs: max = 181.8s, min = 8.3s, avg = 86.4s, dev = 78.9s
//tensorflow/python/kernel_tests/signal:window_ops_test_cpu PASSED in 110.8s
  Stats over 4 runs: max = 110.8s, min = 21.6s, avg = 48.2s, dev = 36.6s
//tensorflow/python/ops:nn_batchnorm_test_cpu PASSED in 69.5s
  Stats over 4 runs: max = 69.5s, min = 13.3s, avg = 29.2s, dev = 23.4s
//tensorflow/python/ops:nn_fused_batchnorm_d9m_test_cpu PASSED in 89.8s
  Stats over 4 runs: max = 89.8s, min = 18.1s, avg = 36.3s, dev = 30.9s
//tensorflow/python/ops/ragged:ragged_gather_op_test PASSED in 83.2s
  Stats over 4 runs: max = 83.2s, min = 36.4s, avg = 50.9s, dev = 18.8s
//tensorflow/python/ops/ragged:ragged_getitem_test PASSED in 73.1s
  Stats over 4 runs: max = 73.1s, min = 48.4s, avg = 55.3s, dev = 10.3s
//tensorflow/compiler/tests:conv3d_test_cpu PASSED in 55.4s
  Stats over 5 runs: max = 55.4s, min = 12.1s, avg = 22.8s, dev = 16.6s
//tensorflow/compiler/tests:conv3d_test_cpu_mlir_bridge_test PASSED in 48.8s
  Stats over 5 runs: max = 48.8s, min = 11.6s, avg = 23.2s, dev = 14.6s
//tensorflow/compiler/tests:depthwise_conv_op_test_cpu PASSED in 120.8s
  Stats over 5 runs: max = 120.8s, min = 18.4s, avg = 44.9s, dev = 38.4s
//tensorflow/compiler/tests:depthwise_conv_op_test_cpu_mlir_bridge_test PASSED in 74.5s
  Stats over 5 runs: max = 74.5s, min = 13.9s, avg = 29.8s, dev = 22.6s
//tensorflow/compiler/tests:fused_batchnorm_test_cpu PASSED in 47.2s
  Stats over 5 runs: max = 47.2s, min = 7.9s, avg = 17.6s, dev = 15.0s
//tensorflow/compiler/tests:fused_batchnorm_test_cpu_mlir_bridge_test PASSED in 44.3s
  Stats over 5 runs: max = 44.3s, min = 11.6s, avg = 19.4s, dev = 12.5s
//tensorflow/compiler/tests:reduce_ops_test_cpu PASSED in 64.3s
  Stats over 5 runs: max = 64.3s, min = 21.1s, avg = 32.4s, dev = 16.2s
//tensorflow/compiler/tests:reduce_ops_test_cpu_mlir_bridge_test PASSED in 130.8s
  Stats over 5 runs: max = 130.8s, min = 28.1s, avg = 52.9s, dev = 39.1s
//tensorflow/compiler/tests:special_math_test_cpu PASSED in 189.8s
  Stats over 5 runs: max = 189.8s, min = 52.1s, avg = 90.0s, dev = 51.0s
//tensorflow/compiler/tests:special_math_test_cpu_mlir_bridge_test PASSED in 180.2s
  Stats over 5 runs: max = 180.2s, min = 49.2s, avg = 98.7s, dev = 44.4s
//tensorflow/core/grappler/optimizers:constant_folding_test PASSED in 11.9s
  Stats over 5 runs: max = 11.9s, min = 5.5s, avg = 8.2s, dev = 2.1s
//tensorflow/dtensor/python/tests:layout_propagation_test_cpu PASSED in 96.2s
  Stats over 5 runs: max = 96.2s, min = 8.5s, avg = 27.7s, dev = 34.3s
//tensorflow/dtensor/python/tests:multi_mesh_test_cpu PASSED in 50.0s
  Stats over 5 runs: max = 50.0s, min = 9.0s, avg = 18.7s, dev = 15.7s
//tensorflow/python/distribute:mirrored_strategy_test_2gpu PASSED in 52.2s
  Stats over 5 runs: max = 52.2s, min = 11.5s, avg = 21.8s, dev = 15.3s
//tensorflow/python/distribute:mirrored_strategy_test_cpu PASSED in 58.2s
  Stats over 5 runs: max = 58.2s, min = 13.7s, avg = 23.4s, dev = 17.4s
//tensorflow/python/eager:device_placement_test_cpu PASSED in 33.7s
  Stats over 5 runs: max = 33.7s, min = 10.7s, avg = 16.3s, dev = 8.7s
//tensorflow/python/eager:forwardprop_test_cpu PASSED in 216.1s
  Stats over 5 runs: max = 216.1s, min = 17.4s, avg = 93.4s, dev = 69.3s
//tensorflow/python/eager/polymorphic_function:gradients_test_cpu PASSED in 65.8s
  Stats over 5 runs: max = 65.8s, min = 13.2s, avg = 28.4s, dev = 19.3s
//tensorflow/python/grappler:cluster_test_cpu PASSED in 62.7s
  Stats over 5 runs: max = 62.7s, min = 6.1s, avg = 19.7s, dev = 21.7s
//tensorflow/python/kernel_tests/linalg:cholesky_op_test_cpu PASSED in 160.2s
  Stats over 5 runs: max = 160.2s, min = 50.4s, avg = 87.9s, dev = 42.7s
//tensorflow/python/kernel_tests/linalg:linear_operator_adjoint_test_cpu PASSED in 133.1s
  Stats over 5 runs: max = 133.1s, min = 62.1s, avg = 99.6s, dev = 27.2s
//tensorflow/python/kernel_tests/linalg:linear_operator_composition_test_cpu PASSED in 260.2s
  Stats over 5 runs: max = 260.2s, min = 116.6s, avg = 198.6s, dev = 49.6s
//tensorflow/python/kernel_tests/linalg:linear_operator_diag_test_cpu PASSED in 123.7s
  Stats over 5 runs: max = 123.7s, min = 48.9s, avg = 80.6s, dev = 28.4s
//tensorflow/python/kernel_tests/linalg:linear_operator_full_matrix_test_cpu PASSED in 189.7s
  Stats over 5 runs: max = 189.7s, min = 83.7s, avg = 146.0s, dev = 35.6s
//tensorflow/python/kernel_tests/linalg:linear_operator_householder_test_cpu PASSED in 156.0s
  Stats over 5 runs: max = 156.0s, min = 50.8s, avg = 105.2s, dev = 33.5s
//tensorflow/python/kernel_tests/linalg:linear_operator_identity_test_cpu PASSED in 177.8s
  Stats over 5 runs: max = 177.8s, min = 104.9s, avg = 149.6s, dev = 25.8s
//tensorflow/python/kernel_tests/linalg:linear_operator_inversion_test_cpu PASSED in 168.9s
  Stats over 5 runs: max = 168.9s, min = 47.0s, avg = 98.1s, dev = 41.6s
//tensorflow/python/kernel_tests/linalg:linear_operator_permutation_test_cpu PASSED in 134.6s
  Stats over 5 runs: max = 134.6s, min = 40.9s, avg = 73.6s, dev = 32.6s
//tensorflow/python/kernel_tests/linalg:linear_operator_toeplitz_test_cpu PASSED in 176.0s
  Stats over 5 runs: max = 176.0s, min = 74.2s, avg = 110.2s, dev = 35.8s
//tensorflow/python/kernel_tests/linalg:linear_operator_util_test_cpu PASSED in 45.5s
  Stats over 5 runs: max = 45.5s, min = 8.6s, avg = 19.8s, dev = 13.5s
//tensorflow/python/kernel_tests/linalg:linear_operator_zeros_test_cpu PASSED in 122.2s
  Stats over 5 runs: max = 122.2s, min = 34.7s, avg = 69.3s, dev = 31.4s
//tensorflow/python/kernel_tests/linalg:tridiagonal_matmul_op_test_cpu PASSED in 206.9s
  Stats over 5 runs: max = 206.9s, min = 6.5s, avg = 54.2s, dev = 76.9s
//tensorflow/python/kernel_tests/nn_ops:fractional_avg_pool_op_test PASSED in 32.2s
  Stats over 5 runs: max = 32.2s, min = 6.8s, avg = 16.8s, dev = 10.2s
//tensorflow/python/kernel_tests/nn_ops:fractional_max_pool_op_test PASSED in 29.7s
  Stats over 5 runs: max = 29.7s, min = 6.8s, avg = 16.8s, dev = 8.9s
//tensorflow/python/kernel_tests/sparse_ops:sparse_ops_test_cpu PASSED in 56.9s
  Stats over 5 runs: max = 56.9s, min = 7.9s, avg = 25.8s, dev = 19.2s
//tensorflow/python/ops/parallel_for:math_test_cpu PASSED in 139.5s
  Stats over 5 runs: max = 139.5s, min = 28.0s, avg = 74.8s, dev = 45.5s
//tensorflow/compiler/tests:scan_ops_test_cpu PASSED in 84.3s
  Stats over 6 runs: max = 84.3s, min = 23.4s, avg = 44.4s, dev = 19.2s
//tensorflow/compiler/tests:scan_ops_test_cpu_mlir_bridge_test PASSED in 79.2s
  Stats over 6 runs: max = 79.2s, min = 24.6s, avg = 44.3s, dev = 18.1s
//tensorflow/python/data/experimental/kernel_tests:make_batched_features_dataset_test PASSED in 79.0s
  Stats over 6 runs: max = 79.0s, min = 12.9s, avg = 35.8s, dev = 24.4s
//tensorflow/python/kernel_tests/array_ops:diag_op_test_cpu PASSED in 179.3s
  Stats over 6 runs: max = 179.3s, min = 12.5s, avg = 44.6s, dev = 60.3s
//tensorflow/python/kernel_tests/math_ops:reduction_ops_test_cpu PASSED in 102.8s
  Stats over 6 runs: max = 102.8s, min = 28.7s, avg = 56.8s, dev = 23.4s
//tensorflow/python/distribute/experimental/rpc:rpc_ops_test PASSED in 95.1s
  Stats over 7 runs: max = 95.1s, min = 9.1s, avg = 23.2s, dev = 29.4s
//tensorflow/compiler/tests:ftrl_test_cpu PASSED in 57.5s
  Stats over 8 runs: max = 57.5s, min = 10.1s, avg = 18.8s, dev = 15.0s
//tensorflow/compiler/tests:matrix_diag_ops_test_cpu PASSED in 198.5s
  Stats over 8 runs: max = 198.5s, min = 5.9s, avg = 80.4s, dev = 72.5s
//tensorflow/compiler/tests:matrix_diag_ops_test_cpu_mlir_bridge_test PASSED in 210.8s
  Stats over 8 runs: max = 210.8s, min = 5.4s, avg = 82.8s, dev = 79.5s
//tensorflow/compiler/tests:ternary_ops_test_cpu PASSED in 89.4s
  Stats over 8 runs: max = 89.4s, min = 9.2s, avg = 29.2s, dev = 25.0s
//tensorflow/compiler/tests:ternary_ops_test_cpu_mlir_bridge_test PASSED in 65.4s
  Stats over 8 runs: max = 65.4s, min = 13.0s, avg = 26.2s, dev = 16.9s
//tensorflow/dtensor/python/tests:input_util_test PASSED in 76.3s
  Stats over 8 runs: max = 76.3s, min = 32.0s, avg = 41.0s, dev = 13.6s
//tensorflow/dtensor/python/tests:save_restore_v2_test_cpu PASSED in 55.7s
  Stats over 8 runs: max = 55.7s, min = 14.6s, avg = 28.4s, dev = 15.6s
//tensorflow/python/data/experimental/kernel_tests:csv_dataset_test PASSED in 172.5s
  Stats over 8 runs: max = 172.5s, min = 11.2s, avg = 44.8s, dev = 51.0s
//tensorflow/python/data/experimental/kernel_tests:global_shuffle_test PASSED in 141.5s
  Stats over 8 runs: max = 141.5s, min = 45.4s, avg = 98.4s, dev = 29.4s
//tensorflow/python/data/experimental/kernel_tests:parallel_interleave_test PASSED in 103.6s
  Stats over 8 runs: max = 103.6s, min = 24.8s, avg = 64.4s, dev = 28.3s
//tensorflow/python/data/experimental/kernel_tests/service:coordinated_read_ft_test PASSED in 68.8s
  Stats over 8 runs: max = 68.8s, min = 7.1s, avg = 32.5s, dev = 18.4s
//tensorflow/python/data/experimental/kernel_tests/service:coordinated_read_test PASSED in 66.9s
  Stats over 8 runs: max = 66.9s, min = 8.4s, avg = 23.9s, dev = 18.5s
//tensorflow/python/data/experimental/kernel_tests/service:cross_trainer_cache_test PASSED in 115.3s
  Stats over 8 runs: max = 115.3s, min = 5.7s, avg = 30.0s, dev = 34.3s
//tensorflow/python/data/experimental/kernel_tests/service:distributed_save_load_ft_test PASSED in 249.1s
  Stats over 8 runs: max = 249.1s, min = 27.6s, avg = 106.1s, dev = 80.7s
//tensorflow/python/data/experimental/kernel_tests/service:distributed_save_load_test PASSED in 532.7s
  Stats over 8 runs: max = 532.7s, min = 83.2s, avg = 241.4s, dev = 152.3s
//tensorflow/python/data/experimental/kernel_tests/service:distributed_save_test PASSED in 851.5s
  Stats over 8 runs: max = 851.5s, min = 24.3s, avg = 172.8s, dev = 262.7s
//tensorflow/python/data/experimental/kernel_tests/service:fault_tolerance_test PASSED in 70.6s
  Stats over 8 runs: max = 70.6s, min = 7.8s, avg = 21.3s, dev = 19.4s
//tensorflow/python/data/kernel_tests:batch_test PASSED in 198.2s
  Stats over 8 runs: max = 198.2s, min = 75.8s, avg = 116.2s, dev = 35.3s
//tensorflow/python/data/kernel_tests:filter_test PASSED in 189.3s
  Stats over 8 runs: max = 189.3s, min = 39.4s, avg = 75.7s, dev = 48.3s
//tensorflow/python/data/kernel_tests:flat_map_test PASSED in 180.2s
  Stats over 8 runs: max = 180.2s, min = 42.7s, avg = 89.9s, dev = 48.2s
//tensorflow/python/data/kernel_tests:shard_test PASSED in 225.8s
  Stats over 8 runs: max = 225.8s, min = 58.0s, avg = 153.5s, dev = 50.7s
//tensorflow/python/data/kernel_tests:shuffle_test PASSED in 232.1s
  Stats over 8 runs: max = 232.1s, min = 40.4s, avg = 150.6s, dev = 57.7s
//tensorflow/python/data/kernel_tests:skip_test PASSED in 214.1s
  Stats over 8 runs: max = 214.1s, min = 76.1s, avg = 144.3s, dev = 44.4s
//tensorflow/python/data/kernel_tests:tf_record_dataset_test PASSED in 109.6s
  Stats over 8 runs: max = 109.6s, min = 33.9s, avg = 64.6s, dev = 22.6s
//tensorflow/python/distribute/failure_handling:failure_handler_test PASSED in 131.9s
  Stats over 8 runs: max = 131.9s, min = 43.9s, avg = 87.5s, dev = 28.9s
//tensorflow/python/distribute/failure_handling:gce_failure_handler_test PASSED in 148.8s
  Stats over 8 runs: max = 148.8s, min = 18.7s, avg = 70.7s, dev = 50.0s
//tensorflow/python/kernel_tests/linalg:linalg_ops_test_cpu PASSED in 107.6s
  Stats over 8 runs: max = 107.6s, min = 52.0s, avg = 74.5s, dev = 18.4s
//tensorflow/python/kernel_tests/linalg:linear_operator_block_diag_test_cpu PASSED in 284.5s
  Stats over 8 runs: max = 284.5s, min = 97.5s, avg = 172.4s, dev = 71.0s
//tensorflow/python/kernel_tests/linalg:linear_operator_block_lower_triangular_test_cpu PASSED in 221.1s
  Stats over 8 runs: max = 221.1s, min = 74.1s, avg = 130.7s, dev = 49.7s
//tensorflow/python/kernel_tests/nn_ops:depthwise_conv_op_d9m_test_cpu PASSED in 106.5s
  Stats over 8 runs: max = 106.5s, min = 6.2s, avg = 27.2s, dev = 33.5s
//tensorflow/python/kernel_tests/nn_ops:depthwise_conv_op_test_cpu PASSED in 10.4s
  Stats over 8 runs: max = 10.4s, min = 4.3s, avg = 7.0s, dev = 2.1s
//tensorflow/python/ops/ragged:dynamic_ragged_shape_test PASSED in 90.6s
  Stats over 8 runs: max = 90.6s, min = 32.1s, avg = 42.9s, dev = 18.5s
//tensorflow/python/ops/ragged:ragged_tensor_test PASSED in 62.2s
  Stats over 8 runs: max = 62.2s, min = 9.8s, avg = 21.0s, dev = 16.1s
//tensorflow/compiler/tests:conv2d_test_cpu PASSED in 47.6s
  Stats over 10 runs: max = 47.6s, min = 9.2s, avg = 15.6s, dev = 10.9s
//tensorflow/compiler/tests:conv2d_test_cpu_mlir_bridge_test PASSED in 48.6s
  Stats over 10 runs: max = 48.6s, min = 8.7s, avg = 14.0s, dev = 11.6s
//tensorflow/compiler/tests:random_ops_test_cpu PASSED in 37.0s
  Stats over 10 runs: max = 37.0s, min = 8.4s, avg = 17.3s, dev = 7.5s
//tensorflow/compiler/tests:random_ops_test_cpu_mlir_bridge_test PASSED in 101.4s
  Stats over 10 runs: max = 101.4s, min = 10.6s, avg = 26.3s, dev = 25.4s
//tensorflow/compiler/tests:stateful_random_ops_test_cpu PASSED in 67.0s
  Stats over 10 runs: max = 67.0s, min = 25.1s, avg = 39.1s, dev = 10.4s
//tensorflow/compiler/tests:stateful_random_ops_test_cpu_mlir_bridge_test PASSED in 69.7s
  Stats over 10 runs: max = 69.7s, min = 29.3s, avg = 39.3s, dev = 11.0s
//tensorflow/compiler/tests:stateless_random_ops_test_cpu PASSED in 161.4s
  Stats over 10 runs: max = 161.4s, min = 56.6s, avg = 95.2s, dev = 29.9s
//tensorflow/compiler/tests:stateless_random_ops_test_cpu_mlir_bridge_test PASSED in 128.9s
  Stats over 10 runs: max = 128.9s, min = 54.7s, avg = 88.1s, dev = 27.4s
//tensorflow/python/data/kernel_tests:rejection_resample_test PASSED in 86.6s
  Stats over 10 runs: max = 86.6s, min = 7.2s, avg = 25.3s, dev = 23.8s
//tensorflow/python/distribute:input_lib_type_spec_test_2gpu PASSED in 49.6s
  Stats over 10 runs: max = 49.6s, min = 6.8s, avg = 21.1s, dev = 12.0s
//tensorflow/python/distribute:input_lib_type_spec_test_cpu PASSED in 105.3s
  Stats over 10 runs: max = 105.3s, min = 8.4s, avg = 27.4s, dev = 26.8s
//tensorflow/python/framework:function_test_cpu PASSED in 67.7s
  Stats over 10 runs: max = 67.7s, min = 7.1s, avg = 20.2s, dev = 19.7s
//tensorflow/python/kernel_tests/array_ops:array_ops_test_cpu PASSED in 72.6s
  Stats over 10 runs: max = 72.6s, min = 7.6s, avg = 19.1s, dev = 18.5s
//tensorflow/python/kernel_tests/array_ops:inplace_ops_test_cpu PASSED in 81.5s
  Stats over 10 runs: max = 81.5s, min = 5.4s, avg = 14.7s, dev = 22.3s
//tensorflow/python/kernel_tests/data_structures:tensor_array_ops_test_cpu PASSED in 52.4s
  Stats over 10 runs: max = 52.4s, min = 9.2s, avg = 16.3s, dev = 12.6s
//tensorflow/python/kernel_tests/linalg:linear_operator_tridiag_test_cpu PASSED in 232.6s
  Stats over 10 runs: max = 232.6s, min = 73.3s, avg = 132.4s, dev = 52.2s
//tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_ops_test_cpu PASSED in 100.1s
  Stats over 10 runs: max = 100.1s, min = 8.1s, avg = 61.6s, dev = 27.9s
//tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_sparse_mat_mul_grad_test_cpu PASSED in 44.7s
  Stats over 10 runs: max = 44.7s, min = 5.5s, avg = 12.1s, dev = 11.4s
//tensorflow/python/kernel_tests/math_ops:cwise_ops_unary_test_cpu PASSED in 26.0s
  Stats over 10 runs: max = 26.0s, min = 5.5s, avg = 11.1s, dev = 5.5s
//tensorflow/python/kernel_tests/math_ops:segment_reduction_ops_test_cpu PASSED in 47.5s
  Stats over 10 runs: max = 47.5s, min = 5.2s, avg = 24.6s, dev = 15.8s
//tensorflow/python/kernel_tests/nn_ops:pooling_ops_test_cpu PASSED in 91.9s
  Stats over 10 runs: max = 91.9s, min = 5.0s, avg = 22.9s, dev = 26.2s
//tensorflow/python/kernel_tests/nn_ops:rnn_test_cpu PASSED in 77.8s
  Stats over 10 runs: max = 77.8s, min = 6.2s, avg = 16.4s, dev = 20.6s
//tensorflow/python/kernel_tests/random:random_index_shuffle_test PASSED in 48.9s
  Stats over 10 runs: max = 48.9s, min = 7.0s, avg = 13.0s, dev = 12.1s
//tensorflow/python/kernel_tests/random:stateless_random_ops_test_cpu PASSED in 224.4s
  Stats over 10 runs: max = 224.4s, min = 19.4s, avg = 89.2s, dev = 61.9s
//tensorflow/python/ops:special_math_ops_test_cpu PASSED in 61.1s
  Stats over 10 runs: max = 61.1s, min = 5.9s, avg = 18.9s, dev = 17.7s
//tensorflow/python/ops:weak_tensor_special_math_ops_test_cpu PASSED in 36.7s
  Stats over 10 runs: max = 36.7s, min = 5.0s, avg = 10.4s, dev = 8.9s
//tensorflow/python/ops/numpy_ops/tests:np_indexing_test PASSED in 201.6s
  Stats over 10 runs: max = 201.6s, min = 75.4s, avg = 103.8s, dev = 33.9s
//tensorflow/python/ops/ragged:ragged_tensor_supported_values_test PASSED in 62.4s
  Stats over 10 runs: max = 62.4s, min = 12.6s, avg = 20.0s, dev = 14.2s
//tensorflow/python/saved_model:load_test_cpu PASSED in 58.8s
  Stats over 10 runs: max = 58.8s, min = 29.3s, avg = 36.9s, dev = 9.9s
//tensorflow/compiler/tests:fft_test_cpu PASSED in 63.9s
  Stats over 12 runs: max = 63.9s, min = 10.2s, avg = 25.7s, dev = 14.5s
//tensorflow/python/data/experimental/kernel_tests:group_by_reducer_test PASSED in 101.2s
  Stats over 12 runs: max = 101.2s, min = 6.5s, avg = 25.7s, dev = 25.9s
//tensorflow/python/data/kernel_tests:choose_from_datasets_test PASSED in 73.5s
  Stats over 12 runs: max = 73.5s, min = 6.8s, avg = 22.1s, dev = 17.4s
//tensorflow/python/data/kernel_tests:memory_cleanup_test_cpu PASSED in 78.0s
  Stats over 12 runs: max = 78.0s, min = 5.0s, avg = 16.9s, dev = 18.8s
//tensorflow/python/distribute:moving_averages_test_2gpu PASSED in 101.7s
  Stats over 12 runs: max = 101.7s, min = 14.4s, avg = 28.2s, dev = 22.6s
//tensorflow/python/distribute:moving_averages_test_cpu PASSED in 78.3s
  Stats over 12 runs: max = 78.3s, min = 16.8s, avg = 25.7s, dev = 16.1s
//tensorflow/python/eager/polymorphic_function:polymorphic_function_test_cpu PASSED in 94.1s
  Stats over 15 runs: max = 94.1s, min = 15.9s, avg = 29.7s, dev = 17.9s
//tensorflow/python/kernel_tests/linalg:linear_operator_low_rank_update_test_cpu PASSED in 290.5s
  Stats over 15 runs: max = 290.5s, min = 95.5s, avg = 137.3s, dev = 58.5s
//tensorflow/python/kernel_tests/nn_ops:rnn_cell_test_cpu PASSED in 112.8s
  Stats over 15 runs: max = 112.8s, min = 7.8s, avg = 26.7s, dev = 28.4s
//tensorflow/python/data/experimental/kernel_tests/service:dynamic_sharding_test PASSED in 40.9s
  Stats over 16 runs: max = 40.9s, min = 5.6s, avg = 17.2s, dev = 8.8s
//tensorflow/python/data/kernel_tests:snapshot_test PASSED in 100.9s
  Stats over 16 runs: max = 100.9s, min = 30.7s, avg = 54.5s, dev = 21.6s
//tensorflow/python/kernel_tests/control_flow:control_flow_ops_py_test_cpu PASSED in 57.7s
  Stats over 16 runs: max = 57.7s, min = 8.0s, avg = 19.2s, dev = 13.4s
//tensorflow/python/kernel_tests/linalg:matrix_exponential_op_test PASSED in 34.2s
  Stats over 16 runs: max = 34.2s, min = 6.4s, avg = 12.1s, dev = 6.5s
//tensorflow/python/kernel_tests/signal:dct_ops_test_cpu PASSED in 116.4s
  Stats over 16 runs: max = 116.4s, min = 10.1s, avg = 20.3s, dev = 25.0s
//tensorflow/python/ops:image_ops_test_cpu PASSED in 42.9s
  Stats over 16 runs: max = 42.9s, min = 8.6s, avg = 15.4s, dev = 8.1s
//tensorflow/python/data/experimental/kernel_tests/service:distributed_save_ft_test FLAKY, failed in 1 out of 18 in 901.3s
  Stats over 18 runs: max = 901.3s, min = 12.4s, avg = 135.6s, dev = 258.1s
  /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test/shard_17_of_17/test_attempts/attempt_1.log
//tensorflow/python/data/kernel_tests:map_test PASSED in 225.2s
  Stats over 19 runs: max = 225.2s, min = 33.1s, avg = 113.8s, dev = 62.9s
//tensorflow/compiler/tests:pooling_ops_3d_test_cpu PASSED in 46.2s
  Stats over 20 runs: max = 46.2s, min = 6.1s, avg = 10.4s, dev = 8.3s
//tensorflow/compiler/tests:pooling_ops_3d_test_cpu_mlir_bridge_test PASSED in 33.7s
  Stats over 20 runs: max = 33.7s, min = 6.0s, avg = 10.6s, dev = 5.9s
//tensorflow/compiler/tests:pooling_ops_test_cpu PASSED in 50.8s
  Stats over 20 runs: max = 50.8s, min = 5.4s, avg = 13.2s, dev = 9.9s
//tensorflow/compiler/tests:pooling_ops_test_cpu_mlir_bridge_test PASSED in 42.9s
  Stats over 20 runs: max = 42.9s, min = 5.1s, avg = 12.2s, dev = 8.4s
//tensorflow/compiler/tests:stochastic_cast_op_test_cpu PASSED in 44.5s
  Stats over 20 runs: max = 44.5s, min = 9.7s, avg = 15.3s, dev = 7.5s
//tensorflow/compiler/tests:unary_ops_test_cpu PASSED in 75.6s
  Stats over 20 runs: max = 75.6s, min = 5.5s, avg = 23.5s, dev = 21.1s
//tensorflow/compiler/tests:unary_ops_test_cpu_mlir_bridge_test PASSED in 93.9s
  Stats over 20 runs: max = 93.9s, min = 6.6s, avg = 26.0s, dev = 23.8s
//tensorflow/dtensor/python/tests:rng_test_cpu PASSED in 58.1s
  Stats over 20 runs: max = 58.1s, min = 8.7s, avg = 15.4s, dev = 10.1s
//tensorflow/python/autograph/tests:loop_control_flow_test PASSED in 205.5s
  Stats over 20 runs: max = 205.5s, min = 17.6s, avg = 32.2s, dev = 40.0s
//tensorflow/python/kernel_tests:metrics_test PASSED in 85.8s
  Stats over 20 runs: max = 85.8s, min = 9.9s, avg = 36.3s, dev = 22.7s
//tensorflow/python/kernel_tests/array_ops:matrix_band_part_op_test_cpu PASSED in 81.4s
  Stats over 20 runs: max = 81.4s, min = 4.6s, avg = 10.8s, dev = 16.3s
//tensorflow/python/kernel_tests/data_structures:barrier_ops_test PASSED in 56.9s
  Stats over 20 runs: max = 56.9s, min = 5.2s, avg = 13.2s, dev = 11.3s
//tensorflow/python/kernel_tests/linalg:eig_op_test PASSED in 89.7s
  Stats over 20 runs: max = 89.7s, min = 4.9s, avg = 25.0s, dev = 24.7s
//tensorflow/python/kernel_tests/linalg:linalg_grad_test_cpu PASSED in 177.7s
  Stats over 20 runs: max = 177.7s, min = 27.9s, avg = 76.5s, dev = 42.0s
//tensorflow/python/kernel_tests/linalg:norm_op_test_cpu PASSED in 42.2s
  Stats over 20 runs: max = 42.2s, min = 6.4s, avg = 11.8s, dev = 7.9s
//tensorflow/python/kernel_tests/linalg:normalize_op_test_cpu PASSED in 42.6s
  Stats over 20 runs: max = 42.6s, min = 8.2s, avg = 18.0s, dev = 8.5s
//tensorflow/python/kernel_tests/linalg:qr_op_test_cpu PASSED in 381.1s
  Stats over 20 runs: max = 381.1s, min = 24.6s, avg = 78.6s, dev = 80.2s
//tensorflow/python/kernel_tests/linalg:self_adjoint_eig_op_test_cpu PASSED in 64.0s
  Stats over 20 runs: max = 64.0s, min = 5.6s, avg = 19.3s, dev = 12.8s
//tensorflow/python/kernel_tests/math_ops:batch_matmul_op_test_cpu PASSED in 36.3s
  Stats over 20 runs: max = 36.3s, min = 6.9s, avg = 18.4s, dev = 8.2s
//tensorflow/python/kernel_tests/math_ops:matmul_op_test_cpu PASSED in 39.8s
  Stats over 20 runs: max = 39.8s, min = 18.3s, avg = 27.9s, dev = 5.8s
//tensorflow/python/kernel_tests/math_ops:tensordot_op_test_cpu PASSED in 97.4s
  Stats over 20 runs: max = 97.4s, min = 9.4s, avg = 41.1s, dev = 26.3s
//tensorflow/python/kernel_tests/nn_ops:embedding_ops_test_cpu PASSED in 73.3s
  Stats over 20 runs: max = 73.3s, min = 14.6s, avg = 22.7s, dev = 12.2s
//tensorflow/python/data/kernel_tests:interleave_test PASSED in 158.1s
  Stats over 24 runs: max = 158.1s, min = 21.7s, avg = 58.9s, dev = 32.9s
//tensorflow/python/data/kernel_tests:sample_from_datasets_test PASSED in 59.0s
  Stats over 24 runs: max = 59.0s, min = 5.3s, avg = 23.8s, dev = 15.0s
//tensorflow/dtensor/python/tests:multi_device_spmd_test_cpu PASSED in 130.1s
  Stats over 25 runs: max = 130.1s, min = 36.7s, avg = 52.1s, dev = 17.8s
//tensorflow/python/kernel_tests/nn_ops:conv_ops_3d_test_cpu PASSED in 74.5s
  Stats over 30 runs: max = 74.5s, min = 4.6s, avg = 17.5s, dev = 15.0s
//tensorflow/python/data/experimental/kernel_tests/service:data_service_ops_test PASSED in 117.5s
  Stats over 32 runs: max = 117.5s, min = 8.1s, avg = 18.8s, dev = 18.7s
//tensorflow/python/data/experimental/kernel_tests/service:worker_tags_test PASSED in 63.0s
  Stats over 32 runs: max = 63.0s, min = 6.2s, avg = 17.4s, dev = 11.2s
//tensorflow/python/distribute:multi_process_runner_test_2gpu PASSED in 224.1s
  Stats over 35 runs: max = 224.1s, min = 6.6s, avg = 30.6s, dev = 42.8s
//tensorflow/python/distribute:multi_process_runner_test_cpu PASSED in 238.3s
  Stats over 35 runs: max = 238.3s, min = 6.8s, avg = 30.6s, dev = 42.6s
//tensorflow/core/kernels:stochastic_cast_op_test PASSED in 5.6s
  Stats over 48 runs: max = 5.6s, min = 0.7s, avg = 1.9s, dev = 1.0s
//tensorflow/compiler/mlir/quantization/tensorflow/python:quantize_model_test PASSED in 205.0s
  Stats over 50 runs: max = 205.0s, min = 37.2s, avg = 82.4s, dev = 40.8s
//tensorflow/compiler/tests:sort_ops_test_cpu PASSED in 51.1s
  Stats over 50 runs: max = 51.1s, min = 4.6s, avg = 24.0s, dev = 11.5s
//tensorflow/compiler/tests:sort_ops_test_cpu_mlir_bridge_test PASSED in 48.4s
  Stats over 50 runs: max = 48.4s, min = 4.9s, avg = 22.7s, dev = 10.9s
//tensorflow/python/kernel_tests/linalg:linear_operator_circulant_test_cpu PASSED in 117.9s
  Stats over 50 runs: max = 117.9s, min = 35.3s, avg = 52.9s, dev = 20.0s
//tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_dense_mat_mul_grad_test_cpu PASSED in 39.1s
  Stats over 50 runs: max = 39.1s, min = 6.4s, avg = 13.9s, dev = 5.4s
//tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_dense_mat_mul_onednn_grad_test PASSED in 35.0s
  Stats over 50 runs: max = 35.0s, min = 5.7s, avg = 13.2s, dev = 5.4s
//tensorflow/python/kernel_tests/math_ops:cwise_ops_binary_test_cpu PASSED in 46.1s
  Stats over 50 runs: max = 46.1s, min = 11.5s, avg = 21.6s, dev = 7.1s
//tensorflow/python/kernel_tests/math_ops:cwise_ops_test_cpu PASSED in 27.7s
  Stats over 50 runs: max = 27.7s, min = 3.9s, avg = 7.4s, dev = 4.2s
Executed 3076 out of 3076 tests: 3075 tests pass and 1 fails locally.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.