==================== Test output for //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_ft_test (shard 2 of 17): 2024-04-25 06:08:10.943298: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. Running tests under Python 3.11.6: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/python_aarch64-unknown-linux-gnu/bin/python3 [ RUN ] SnapshotFtTest.testDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_1_numworkers_3 [ SKIPPED ] SnapshotFtTest.testDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_1_numworkers_3 [ RUN ] SnapshotFtTest.testLargeMultiSourceSnapshotRecoversAndCompletes_test_mode_graph_tfapiversion_1_numsources_3_numworkers_1 [ SKIPPED ] SnapshotFtTest.testLargeMultiSourceSnapshotRecoversAndCompletes_test_mode_graph_tfapiversion_1_numsources_3_numworkers_1 [ RUN ] SnapshotFtTest.testNestedDataset_test_mode_eager_tfapiversion_2_numworkers_3 2024-04-25 06:08:14.642535: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmppxlnnpqp/tf_data_dispatcher_journal 2024-04-25 06:08:14.642623: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 2024-04-25 06:08:14.643490: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmppxlnnpqp" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:08:14.643523: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:45181 2024-04-25 06:08:14.686323: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:08:14.686870: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:45181. Worker config: protocol: "grpc" dispatcher_address: "localhost:45181" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:08:14.687071: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:46717 2024-04-25 06:08:14.700934: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:45181. 
Worker config: protocol: "grpc" dispatcher_address: "localhost:45181" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:08:14.701154: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:44823 2024-04-25 06:08:14.703474: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:45181. Worker config: protocol: "grpc" dispatcher_address: "localhost:45181" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:08:14.703659: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:36807 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR I0000 00:00:1714025295.366282 2888517 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot I0000 00:00:1714025295.749320 2888517 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot I0000 00:00:1714025295.750602 2889064 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, created stream_0 and assigned to localhost:36807 2024-04-25 06:08:15.751525: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:15.751554: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:46717 due to timeout 2024-04-25 06:08:15.751569: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:44823 due to timeout I0000 00:00:1714025295.755817 2889025 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, created stream_1 and assigned to localhost:44823 2024-04-25 06:08:15.759145: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 46717 I0000 00:00:1714025295.755848 2885580 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, created stream_2 and assigned to localhost:46717 2024-04-25 06:08:16.105301: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45181 2024-04-25 06:08:16.106082: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmppxlnnpqp/tf_data_dispatcher_journal 2024-04-25 06:08:16.106373: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 80us. 2024-04-25 06:08:16.115259: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:08:16.115344: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:08:16.115877: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream 0, chunk 0. 2024-04-25 06:08:16.115991: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream 1, chunk 0. 
I0000 00:00:1714025296.129868 2891274 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot 2024-04-25 06:08:16.130122: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 45181 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmppxlnnpqp" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:08:16.130212: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:45181 2024-04-25 06:08:16.130227: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:16.135464: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:08:16.135973: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream 2, chunk 0. 2024-04-25 06:08:16.180313: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:08:16.188700: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714025296.207857 2891443 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__61a0dc47085ad745_ldcg-aarch64-02-34e5fa53-2867224-616e59ce987e0.tfrecord*. 
2024-04-25 06:08:17.130364: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025297.545202 2891442 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b36328c355134836_ldcg-aarch64-02-63a990ae-2867224-616e59ce98809.tfrecord*. 2024-04-25 06:08:18.265046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025298.986077 2891406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7e1b28e5a010b6ff_ldcg-aarch64-02-a77736fb-2867224-616e59ce9398f.tfrecord*. 2024-04-25 06:08:19.265219: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025299.986348 2891407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ebb3a3f988411444_ldcg-aarch64-02-369f90ce-2867224-616e59ce93992.tfrecord*. 
2024-04-25 06:08:20.275054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:21.285145: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:22.295049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025302.961142 2891410 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11312a69f6a2624b_ldcg-aarch64-02-cbe9d2cd-2867224-616e59ce93a1e.tfrecord*. 2024-04-25 06:08:23.305068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:24.415050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:25.415233: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:26.425053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025306.452954 2891442 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b36328c355134836_ldcg-aarch64-02-63a990ae-2867224-616e59ce98809.tfrecord*. 
2024-04-25 06:08:27.435510: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:28.445057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025308.486306 2891406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7e1b28e5a010b6ff_ldcg-aarch64-02-a77736fb-2867224-616e59ce9398f.tfrecord*. 2024-04-25 06:08:29.555284: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025310.172052 2891406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7e1b28e5a010b6ff_ldcg-aarch64-02-a77736fb-2867224-616e59ce9398f.tfrecord*. 2024-04-25 06:08:30.565051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025311.205216 2891442 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b36328c355134836_ldcg-aarch64-02-63a990ae-2867224-616e59ce98809.tfrecord*. 
2024-04-25 06:08:31.595038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:32.605059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025312.686737 2891443 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__61a0dc47085ad745_ldcg-aarch64-02-34e5fa53-2867224-616e59ce987e0.tfrecord*. 2024-04-25 06:08:33.615055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025314.027803 2891443 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__61a0dc47085ad745_ldcg-aarch64-02-34e5fa53-2867224-616e59ce987e0.tfrecord*. 2024-04-25 06:08:34.617338: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025315.056976 2891406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7e1b28e5a010b6ff_ldcg-aarch64-02-a77736fb-2867224-616e59ce9398f.tfrecord*. 
2024-04-25 06:08:35.625042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:36.627767: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025316.709364 2891406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7e1b28e5a010b6ff_ldcg-aarch64-02-a77736fb-2867224-616e59ce9398f.tfrecord*. 2024-04-25 06:08:37.646203: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:38.655053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:39.655248: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025320.006505 2891443 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__61a0dc47085ad745_ldcg-aarch64-02-34e5fa53-2867224-616e59ce987e0.tfrecord*. 2024-04-25 06:08:40.656670: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025321.625500 2891442 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b36328c355134836_ldcg-aarch64-02-63a990ae-2867224-616e59ce98809.tfrecord*. 
2024-04-25 06:08:41.662224: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025322.625877 2891442 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b36328c355134836_ldcg-aarch64-02-63a990ae-2867224-616e59ce98809.tfrecord*. 2024-04-25 06:08:42.675049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:42.682573: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 2, compression: SNAPPY }. Stream 2, chunk 0, number of elements in chunk: 1484, chunk size: 14.4922KB. 2024-04-25 06:08:42.683447: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_2/checkpoints/checkpoint_2_1484. Checkpointing distributed tf.data snapshot writer took 833us 2024-04-25 06:08:42.798895: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2027, chunk size: 19.7949KB. 2024-04-25 06:08:42.799636: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_2_2027. Checkpointing distributed tf.data snapshot writer took 682us 2024-04-25 06:08:42.844888: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 1, compression: SNAPPY }. Stream 1, chunk 0, number of elements in chunk: 1439, chunk size: 14.0527KB. 
2024-04-25 06:08:42.845738: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_1/checkpoints/checkpoint_2_1439. Checkpointing distributed tf.data snapshot writer took 789us 2024-04-25 06:08:43.680621: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:44.695058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:45.705063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:46.627576: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_2/checkpoints 2024-04-25 06:08:46.627915: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:08:46.708667: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:46.790051: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:113] Distributed tf.data snapshot stream has already been completed for SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:08:46.790195: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:45181. 
Worker config: port: 46717 protocol: "grpc" dispatcher_address: "localhost:45181" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:08:46.790362: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:46717 2024-04-25 06:08:46.855969: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_0/checkpoints 2024-04-25 06:08:46.856359: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:08:46.875313: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 2024-04-25 06:08:46.957335: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 44823 2024-04-25 06:08:46.985236: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45181 2024-04-25 06:08:46.986062: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmppxlnnpqp/tf_data_dispatcher_journal 2024-04-25 06:08:46.986368: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 192us. 
2024-04-25 06:08:47.006337: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:08:47.027544: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714025327.164708 2988109 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot 2024-04-25 06:08:47.165245: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 45181 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmppxlnnpqp" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:08:47.165333: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:45181 2024-04-25 06:08:47.165348: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:47.312827: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:45181. Worker config: port: 44823 protocol: "grpc" dispatcher_address: "localhost:45181" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:08:47.313078: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:44823 2024-04-25 06:08:47.313496: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 36807 2024-04-25 06:08:47.315108: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 1, compression: SNAPPY } I0000 00:00:1714025327.315839 2988969 snapshot_split_provider.cc:252] Restored snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\003" compression: "SNAPPY" }, next split 29, repetition 0. 
2024-04-25 06:08:47.356765: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45181 2024-04-25 06:08:47.357545: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmppxlnnpqp/tf_data_dispatcher_journal 2024-04-25 06:08:47.357879: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 201us. 2024-04-25 06:08:47.365341: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:375] Restored distributed tf.data snapshot writer. Snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream 1, chunk 2. 2024-04-25 06:08:47.365478: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream 1, chunk 2. 2024-04-25 06:08:47.374369: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:08:47.435274: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 1, compression: SNAPPY }. Stream 1, chunk 2, number of elements in chunk: 0, chunk size: 0B. 2024-04-25 06:08:47.435706: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:08:47.436048: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_1/checkpoints/checkpoint_2_0. 
Checkpointing distributed tf.data snapshot writer took 691us 2024-04-25 06:08:47.436349: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot/streams/stream_1/checkpoints 2024-04-25 06:08:47.436820: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot, stream: 1, compression: SNAPPY } I0000 00:00:1714025328.031163 2989093 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot 2024-04-25 06:08:48.035114: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 45181 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmppxlnnpqp" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:08:48.035246: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:45181 2024-04-25 06:08:48.035377: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:48.039311: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:45181. 
Worker config: port: 36807 protocol: "grpc" dispatcher_address: "localhost:45181" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:08:48.039491: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:36807 I0000 00:00:1714025328.305216 2989213 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpsqhhovft/tmpvj87y97b/tf_data_snapshot 2024-04-25 06:08:48.346179: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 36807 2024-04-25 06:08:48.356508: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 44823 2024-04-25 06:08:48.357843: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 46717 2024-04-25 06:08:48.360341: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45181 [ OK ] SnapshotFtTest.testNestedDataset_test_mode_eager_tfapiversion_2_numworkers_3 [ RUN ] SnapshotFtTest.testNonrepeatedDatasetDoesntProduceSecondRepetitionDir_test_mode_graph_tfapiversion_2_numsources_1_numworkers_1 2024-04-25 06:08:49.751143: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpskrupvmo/tf_data_dispatcher_journal 2024-04-25 06:08:49.751222: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 2024-04-25 06:08:49.751473: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpskrupvmo" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:08:49.751509: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:42657 2024-04-25 06:08:49.751524: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:08:49.754272: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:42657. 
Worker config: protocol: "grpc" dispatcher_address: "localhost:42657" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:08:49.754461: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:37207 WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/contextlib.py:105: TensorFlowTestCase.test_session (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `self.session()` or `self.cached_session()` instead. W0425 06:08:49.763645 281473505850400 deprecation.py:50] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/contextlib.py:105: TensorFlowTestCase.test_session (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `self.session()` or `self.cached_session()` instead. 2024-04-25 06:08:49.769488: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled I0000 00:00:1714025329.781253 2994165 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot I0000 00:00:1714025329.811452 2994165 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot 2024-04-25 06:08:49.813115: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 37207 2024-04-25 06:08:49.855906: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 42657 2024-04-25 06:08:49.856660: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpskrupvmo/tf_data_dispatcher_journal 2024-04-25 06:08:49.856853: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 72us. 
I0000 00:00:1714025330.185858 2994490 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot 2024-04-25 06:08:50.188886: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 42657 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpskrupvmo" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:08:50.188977: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:42657 2024-04-25 06:08:50.195050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025330.764944 2995196 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot, created stream_0 and assigned to localhost:37207 2024-04-25 06:08:50.780832: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:08:50.781307: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot, stream 0, chunk 0. 2024-04-25 06:08:50.781809: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:42657. 
Worker config: port: 37207 protocol: "grpc" dispatcher_address: "localhost:42657" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:08:50.781997: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:37207 2024-04-25 06:08:51.205054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025332.050680 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:08:52.215053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:53.215579: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025333.625165 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:08:54.305115: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025335.078661 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 
2024-04-25 06:08:55.315060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:56.325057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:57.325439: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:58.325709: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025338.617607 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:08:59.327461: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:00.335088: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:01.345095: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025342.256074 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:09:02.365080: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:03.371782: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025344.205118 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:09:04.375143: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:05.385083: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025345.666286 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:06.393292: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:07.393475: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025347.554501 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:09:08.395082: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025348.938358 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:09.395587: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025350.385546 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:09:10.415094: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:11.425084: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025352.207559 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:12.435049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025353.297274 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 
2024-04-25 06:09:13.435203: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:14.445055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025354.945676 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:09:15.495077: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025356.351140 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:09:16.495249: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025357.425780 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 
2024-04-25 06:09:17.525096: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:18.525321: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025358.743812 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:19.525504: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:20.535057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025361.428154 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:21.545059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:22.565056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025363.035025 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:09:23.575045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025364.083712 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:09:24.585072: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:25.585268: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025365.738121 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:09:26.605056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025366.865520 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:27.606535: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025367.977360 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 
2024-04-25 06:09:28.608863: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025369.003854 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:29.609030: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025370.368463 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:30.609206: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:31.609390: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025372.222632 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:09:32.615049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025373.509520 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:09:33.615220: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:34.625089: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025374.735628 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:35.625284: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025375.938306 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:36.635147: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025377.165979 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:09:37.645047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025378.640343 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 
2024-04-25 06:09:38.645222: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:39.655052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025380.385074 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:40.655234: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025381.615100 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:41.665043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:42.665219: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025383.145067 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:09:43.675057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025384.277570 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:44.685056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025385.329066 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:45.695070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025386.485025 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:09:46.706615: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025387.665029 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:09:47.706872: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:48.707068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025388.826022 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:49.710483: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025389.878822 3127475 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot]: 0/1 streams completed; 376/1000 splits assigned or completed. I0000 00:00:1714025390.075560 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:09:50.715062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025391.142795 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
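The snapshot_manager progress line above (0/1 streams completed; 376/1000 splits assigned or completed) tracks the distributed snapshot write as workers drain splits from the source dataset. A hedged sketch of how such a write is typically started, assuming the experimental tf.data.experimental.distributed_save entry point and an in-process service like the one sketched earlier; the dataset size and paths are illustrative, not taken from the test.

import tensorflow as tf

dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        work_dir="/tmp/tf_data_dispatcher", fault_tolerant_mode=True))
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher.target.split("://")[1]))

# Kick off the snapshot; the write proceeds asynchronously on the workers, which
# stream chunk files into streams/stream_N/uncommitted_chunks/ as seen in this log.
dataset = tf.data.Dataset.range(1000)
tf.data.experimental.distributed_save(
    dataset, "/tmp/tf_data_snapshot", dispatcher.target)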
2024-04-25 06:09:51.720576: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025392.395036 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:52.720753: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025393.405934 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:53.725060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:54.755055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025394.805103 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:09:55.765062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025396.024284 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 
2024-04-25 06:09:56.765404: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:57.765652: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025397.925912 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:58.765958: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025399.385133 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:09:59.766135: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:00.775070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025400.982982 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:10:01.775967: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025402.516943 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:10:02.785054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:03.795058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025404.037434 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:10:04.805046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025405.065067 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:10:05.885056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025406.338136 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:10:06.885240: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:07.895052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025408.155012 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:10:08.905053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025409.330359 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:10:09.915057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:10.922458: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025411.866190 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:10:11.922654: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:12.925054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025413.345378 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:10:13.945057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:14.955060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:15.965072: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:16.965268: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:17.967503: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:18.975067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:19.985854: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025420.035561 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:10:21.005080: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:22.015069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:23.021269: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:24.021486: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:25.025052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025425.086783 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 
2024-04-25 06:10:26.029397: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:27.029616: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:28.031906: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025428.075839 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:10:29.032087: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:30.035060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025430.285552 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:10:31.035396: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025431.345095 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 
2024-04-25 06:10:32.042596: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025432.415430 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:10:33.075058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:34.085054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025434.087120 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:10:35.112450: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025435.166915 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:10:36.115059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:37.125097: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025437.135296 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:10:38.133364: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025438.856369 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:10:39.133651: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025440.095121 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 
2024-04-25 06:10:40.133828: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:41.135056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:42.145058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025442.195121 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:10:43.185694: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025443.827176 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:10:44.187094: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:45.195052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025445.745543 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:10:46.195239: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025447.085498 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:10:47.195409: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025448.090374 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:10:48.195602: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:49.205055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025449.342690 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. I0000 00:00:1714025449.934585 3227685 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot]: 0/1 streams completed; 720/1000 splits assigned or completed. 
2024-04-25 06:10:50.215043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:51.215220: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025451.727605 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:10:52.225063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025452.735667 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:10:53.225312: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:54.230454: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025454.605075 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:10:55.230685: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025455.784388 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:10:56.245111: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025456.836544 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:10:57.255062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025457.967329 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:10:58.275064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:59.283719: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025459.687704 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:11:00.283913: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:01.285046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025462.026323 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:11:02.285752: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:03.295057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:04.305073: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025464.545093 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:11:05.307981: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025465.775848 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:11:06.308162: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:07.325062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025467.654865 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:11:08.335058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025468.927894 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:11:09.345085: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:10.345290: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:11.345666: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025471.696552 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 
2024-04-25 06:11:12.355062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:13.385064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025473.865126 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:11:14.386969: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025474.953771 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:11:15.387158: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025476.374499 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:11:16.395148: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:17.405054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025477.496260 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:11:18.405247: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:19.415059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025479.799967 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:11:20.415244: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025480.805320 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:11:21.435050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025481.855032 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:11:22.445058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:23.455061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025484.286271 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:11:24.465056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:25.485190: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025486.115492 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:11:26.505040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:27.506167: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025488.005955 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:11:28.525058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025489.135038 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:11:29.525911: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:30.526163: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025491.465045 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 
2024-04-25 06:11:31.526336: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:32.535053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025492.628054 2996407 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a40cd7daa61d261_ldcg-aarch64-02-fe5cc584-2867224-616e59efa6f53.tfrecord*. 2024-04-25 06:11:33.545052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025493.846963 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 2024-04-25 06:11:34.546276: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:35.551278: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:36.565073: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025496.806370 2996406 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5ed32e988412d5d_ldcg-aarch64-02-c1ce8204-2867224-616e59efa2d43.tfrecord*. 
2024-04-25 06:11:37.246295: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 1000, chunk size: 13.6719KB.
2024-04-25 06:11:37.246837: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_2_1000. Checkpointing distributed tf.data snapshot writer took 481us
2024-04-25 06:11:37.563627: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot/streams/stream_0/checkpoints
2024-04-25 06:11:37.563998: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot, stream: 0, compression: SNAPPY }
2024-04-25 06:11:37.572093: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025497.596634 3316267 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiph00sl6/tmpqb8aq2n9/tf_data_snapshot
2024-04-25 06:11:37.655146: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 37207
2024-04-25 06:11:37.708319: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 42657
[ OK ] SnapshotFtTest.testNonrepeatedDatasetDoesntProduceSecondRepetitionDir_test_mode_graph_tfapiversion_2_numsources_1_numworkers_1
[ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_2_numelements_1000_numrepetitions_10_numworkers_3
2024-04-25 06:15:14.448414: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpr0oxy4et/tf_data_dispatcher_journal
2024-04-25 06:15:14.448509: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state.
2024-04-25 06:15:14.448782: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpr0oxy4et" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:15:14.448809: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:33321 2024-04-25 06:15:14.505702: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:15:14.516505: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:33321. Worker config: protocol: "grpc" dispatcher_address: "localhost:33321" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:15:14.516740: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:43233 2024-04-25 06:15:14.519005: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:33321. Worker config: protocol: "grpc" dispatcher_address: "localhost:33321" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:15:14.519190: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:41833 2024-04-25 06:15:14.521104: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:33321. 
Worker config: protocol: "grpc" dispatcher_address: "localhost:33321" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:15:14.521286: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:37731 I0000 00:00:1714025714.765784 3608840 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot I0000 00:00:1714025714.957405 3608840 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot 2024-04-25 06:15:14.966177: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 43233 I0000 00:00:1714025715.018094 3608675 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, created stream_2 and assigned to localhost:37731 I0000 00:00:1714025715.035624 3609051 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, created stream_1 and assigned to localhost:43233 2024-04-25 06:15:15.065307: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:15:15.065893: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream 2, chunk 0. 2024-04-25 06:15:15.105480: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:15:15.106016: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream 1, chunk 0. 
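The records above sketch the fixture this fault-tolerance test appears to drive: a dispatcher journaling to a work_dir with fault_tolerant_mode enabled, three workers registering against it, and the snapshot manager assigning one stream per worker before the stream writers start on chunk 0. A minimal sketch of that shape using the public tf.data service Python API follows; the temporary directories, the three-worker count, and the exact address argument passed to distributed_save are illustrative assumptions, not values taken from the harness.

    # Minimal sketch, assuming the public tf.data service API; paths and address
    # forms are illustrative assumptions, not taken from the log.
    import tempfile
    import tensorflow as tf

    work_dir = tempfile.mkdtemp()      # dispatcher journal lives here
    snapshot_dir = tempfile.mkdtemp()  # snapshot streams/chunks are written here

    # fault_tolerant_mode=True journals dispatcher state to work_dir, which is
    # what the "Attempting to restore dispatcher state from journal" records use.
    dispatcher = tf.data.experimental.service.DispatchServer(
        tf.data.experimental.service.DispatcherConfig(
            work_dir=work_dir, fault_tolerant_mode=True))

    # Three workers register with the dispatcher; the snapshot manager assigns
    # one stream (stream_0 .. stream_2) to each of them.
    workers = [
        tf.data.experimental.service.WorkerServer(
            tf.data.experimental.service.WorkerConfig(
                dispatcher_address=dispatcher.target.split("://")[1]))
        for _ in range(3)
    ]

    # Start a distributed snapshot; the workers then write the uncommitted chunk
    # files that the parallel_tfrecord_writer records refer to.
    dataset = tf.data.Dataset.range(1000)
    tf.data.experimental.distributed_save(dataset, snapshot_dir, dispatcher.target)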
I0000 00:00:1714025715.155236 3609124 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, created stream_0 and assigned to localhost:41833 2024-04-25 06:15:15.205920: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:15:15.405621: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream 0, chunk 0. 2024-04-25 06:15:15.737529: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 126ms. 2024-04-25 06:15:15.874384: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 178ms. 2024-04-25 06:15:15.896532: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 167ms. 2024-04-25 06:15:16.041747: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 122ms. 2024-04-25 06:15:16.055730: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 230ms. 2024-04-25 06:15:16.075610: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 209ms. 2024-04-25 06:15:16.175697: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 131ms. 2024-04-25 06:15:16.295983: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 260ms. 2024-04-25 06:15:16.296134: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 229ms. 2024-04-25 06:15:16.315734: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 205ms. 2024-04-25 06:15:16.526909: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. 
Will retry in 266ms. 2024-04-25 06:15:16.527255: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 312ms. 2024-04-25 06:15:16.569112: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 275ms. 2024-04-25 06:15:16.815573: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 339ms. 2024-04-25 06:15:16.845683: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: Socket closed. Will retry in 344ms. 2024-04-25 06:15:16.846221: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: Socket closed. Will retry in 418ms. 2024-04-25 06:15:16.852321: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 33321 2024-04-25 06:15:16.853107: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpr0oxy4et/tf_data_dispatcher_journal 2024-04-25 06:15:16.853298: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 68us. 2024-04-25 06:15:16.905585: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:15:16.925833: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:15:17.165676: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: Socket closed. Will retry in 350ms. 
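The "Failed to Get next split for snapshot ... Will retry in Nms" records above show the clients backing off while the dispatcher is being bounced: the wait grows from roughly 120ms toward 350ms instead of surfacing an error to the read path. The loop below is only a generic illustration of that retry-with-backoff pattern, not the actual grpc_util implementation.

    # Illustrative retry-with-backoff loop; not the tf.data grpc_util code.
    import random
    import time

    def call_with_retry(fn, deadline_s=60.0, base_delay_s=0.12, max_delay_s=0.35):
        """Retries fn() on transient errors with a jittered, growing delay."""
        start = time.time()
        delay = base_delay_s
        while True:
            try:
                return fn()
            except ConnectionError as err:  # stand-in for CANCELLED/UNAVAILABLE
                if time.time() - start > deadline_s:
                    raise
                sleep_s = min(max_delay_s, delay) * random.uniform(0.8, 1.2)
                print(f"Failed: {err}. Will retry in {int(sleep_s * 1000)}ms.")
                time.sleep(sleep_s)
                delay *= 1.3                # grow the delay between attempts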
I0000 00:00:1714025719.907040 3611411 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot 2024-04-25 06:15:19.907324: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 33321 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpr0oxy4et" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:15:19.907428: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:33321 2024-04-25 06:15:19.907920: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:20.915067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:21.925039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025722.335069 3609525 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__f1830a3acb897c1b_ldcg-aarch64-02-ba0968a1-2867224-616e5b5e2cb79.tfrecord*. 2024-04-25 06:15:22.345171: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 
2024-04-25 06:15:22.945049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:23.955053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025724.706052 3609466 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a0343d065b877ca_ldcg-aarch64-02-5e33203d-2867224-616e5b5e344cb.tfrecord*. 2024-04-25 06:15:24.899907: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:33321. Worker config: port: 43233 protocol: "grpc" dispatcher_address: "localhost:33321" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:15:24.900109: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:43233 2024-04-25 06:15:24.900598: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:15:24.900971: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream 1, chunk 0. 2024-04-25 06:15:24.965066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:25.035880: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 
2024-04-25 06:15:25.975053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:25.975116: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:41833 due to timeout 2024-04-25 06:15:26.985059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025727.456458 3609466 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1a0343d065b877ca_ldcg-aarch64-02-5e33203d-2867224-616e5b5e344cb.tfrecord*. 2024-04-25 06:15:27.616254: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 41833 I0000 00:00:1714025728.545036 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:15:29.857539: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 33321 2024-04-25 06:15:29.858107: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpr0oxy4et/tf_data_dispatcher_journal 2024-04-25 06:15:29.858361: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 147us. 2024-04-25 06:15:29.915821: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: Socket closed. Will retry in 110ms. 
2024-04-25 06:15:29.923784: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:15:29.924335: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714025737.590190 3626992 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot 2024-04-25 06:15:37.595086: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 33321 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpr0oxy4et" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:15:37.595193: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:33321 2024-04-25 06:15:37.605042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:37.636539: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:33321. Worker config: port: 41833 protocol: "grpc" dispatcher_address: "localhost:33321" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:15:37.636740: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:41833 2024-04-25 06:15:37.745091: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:15:37.989403: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream 0, chunk 0. 
2024-04-25 06:15:38.655444: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:38.655520: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:37731 due to timeout I0000 00:00:1714025739.296170 3609465 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__5b02e2c90006a012_ldcg-aarch64-02-965ac45d-2867224-616e5b5e207e9.tfrecord*. 2024-04-25 06:15:39.297721: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 2024-04-25 06:15:39.396364: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 37731 I0000 00:00:1714025740.334971 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:15:40.385514: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 33321 2024-04-25 06:15:40.386350: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpr0oxy4et/tf_data_dispatcher_journal 2024-04-25 06:15:40.386708: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 221us. 
2024-04-25 06:15:40.445642: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:15:40.445996: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714025740.597030 3639669 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot 2024-04-25 06:15:40.597288: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 33321 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpr0oxy4et" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:15:40.597362: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:33321 2024-04-25 06:15:40.597918: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:40.661055: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:33321. Worker config: port: 37731 protocol: "grpc" dispatcher_address: "localhost:33321" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:15:40.661258: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:37731 2024-04-25 06:15:40.661964: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:15:40.662607: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream 2, chunk 0. 
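[editor's note] Here the dispatcher itself has been restarted: the new DispatchServer comes up on the same port 33321 with the same work_dir, restores its state from the journal ("Restored from journal in 221us"), resumes the distributed snapshot, and the bounced worker re-registers and is handed stream 2. A hedged sketch of that recovery step; the fixed port mirrors the log, and the work_dir value is the same placeholder used in the earlier sketch:

# Hedged sketch: after the old dispatcher process is gone (workers log the
# "Socket closed" heartbeat failures above), a new one on the same port and
# work_dir replays the journal and resumes in-progress distributed snapshots.
dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        port=33321,                          # same port, so workers reconnect
        protocol="grpc",
        work_dir="/tmp/tf_data_work_dir",    # same work_dir as before -> journal is found
        fault_tolerant_mode=True,
    ))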
2024-04-25 06:15:41.605057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025742.376170 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:15:42.615090: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:43.625103: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:44.625403: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025744.765496 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 
2024-04-25 06:15:45.635051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:46.655047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:47.656222: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025747.741130 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:15:48.658509: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:49.675040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025749.855399 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 
2024-04-25 06:15:50.685315: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:51.695051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025751.847693 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:15:52.705042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025753.288600 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:15:53.715124: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025754.326421 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 
2024-04-25 06:15:54.725064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:55.728971: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:56.735053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025757.148729 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:15:57.745051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:58.751577: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025759.520766 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:15:59.755050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:00.765041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:01.775082: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025762.226132 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:16:02.785068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025763.265452 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:16:03.785265: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:04.795046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025765.629733 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 
2024-04-25 06:16:05.796872: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025766.738464 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:16:06.797534: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:07.815074: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025767.935108 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:16:08.815291: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:09.825151: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025770.346319 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 
2024-04-25 06:16:10.835039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:11.875050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025772.225719 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:16:12.885800: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:13.886069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025774.789238 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:16:14.886552: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025775.889549 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:16:15.905082: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:16.925039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025777.606192 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:16:17.935041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025778.702278 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:16:18.935539: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:19.965039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025780.866720 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 
2024-04-25 06:16:20.975155: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:21.985068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025782.466141 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:16:23.005042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025783.560027 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:16:24.045042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:25.045243: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025785.968132 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 
2024-04-25 06:16:26.055044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:27.065046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:28.074743: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025788.493416 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:16:29.075118: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025789.822051 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:16:30.095052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:31.105047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025791.247313 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 
2024-04-25 06:16:32.105244: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025792.757195 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:16:33.115102: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:34.125048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025794.250484 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:16:35.135054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:36.155082: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:37.155628: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025797.466743 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 
2024-04-25 06:16:38.165045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:39.175077: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025799.798818 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. I0000 00:00:1714025799.883151 3636209 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1. I0000 00:00:1714025799.883543 3620196 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1. I0000 00:00:1714025799.883846 3640017 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1. 
I0000 00:00:1714025800.096901 3684493 snapshot_manager.cc:775] Starting repetition_1 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, source 0 2024-04-25 06:16:40.185050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:40.185118: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:41833 due to timeout 2024-04-25 06:16:40.185138: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:37731 due to timeout I0000 00:00:1714025800.505557 3685914 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot]: 0/3 streams completed; 1000/10000 splits assigned or completed. 2024-04-25 06:16:41.205070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:42.225046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025803.186421 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:16:43.228345: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:44.235054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025804.934100 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
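[editor's note] The progress line above ("0/3 streams completed; 1000/10000 splits assigned or completed") and the "Starting repetition_1" entry are the dispatcher's view of the snapshot this test is writing: a 10000-split source dataset materialized through the cluster. A hedged sketch of how such a distributed snapshot is initiated and later read back; distributed_save here is the internal helper this test exercises (tensorflow.python.data.experimental.ops.distributed_save_op in this source tree), the range(10000) dataset is an assumption inferred from the split count, and reading the result with Dataset.load assumes the snapshot has reached its done state:

from tensorflow.python.data.experimental.ops import distributed_save_op

snapshot_path = "/tmp/tf_data_snapshot"        # placeholder path
ds = tf.data.Dataset.range(10_000)             # assumption: one source, 10000 splits

# Ask the tf.data service cluster to write the dataset as a distributed snapshot.
distributed_save_op.distributed_save(
    ds, snapshot_path, dispatcher.target.split("://")[1])

# Once all streams are committed and the snapshot is marked done,
# it can be read back like a normally saved dataset.
restored = tf.data.Dataset.load(snapshot_path)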
2024-04-25 06:16:45.245181: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:46.255081: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025806.382203 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:16:47.265096: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025807.435469 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:16:48.275044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025808.935945 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:16:49.285047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:50.295038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025810.935584 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:16:51.305040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:52.315067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:53.325046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025813.565462 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:16:54.335055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025814.945444 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 
2024-04-25 06:16:55.345068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025816.215089 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:16:56.355055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025817.265345 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:16:57.365047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:58.365277: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025818.849497 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:16:59.375053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:00.375257: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025820.405039 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:17:01.395129: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:02.415055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025822.754788 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 
2024-04-25 06:17:03.425049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:04.435041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:05.437698: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025825.567235 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:17:06.455065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:07.465043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025827.835712 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:17:08.475041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025829.105047 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 
2024-04-25 06:17:09.495052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:10.505053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:11.505277: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:12.515091: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025832.825133 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:17:13.515377: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:14.525093: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025835.133691 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 
2024-04-25 06:17:15.545053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:16.565053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025837.331353 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:17:17.585044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:18.595045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025838.705771 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:17:19.595212: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025839.735126 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 
2024-04-25 06:17:20.605053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025841.426918 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:17:21.615034: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:22.615206: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:23.625050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:24.635054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025844.736463 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:17:25.635223: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025845.905034 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 
2024-04-25 06:17:26.655055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:27.665069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:28.685033: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025849.095046 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:17:29.695058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025850.133458 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:17:30.695240: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:31.725061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025851.766652 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 
2024-04-25 06:17:32.735056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025853.336048 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:17:33.735920: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:34.736518: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:35.755060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:36.767055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025857.045495 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:17:37.767242: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025858.509605 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 
2024-04-25 06:17:38.785050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:39.795040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025860.545553 3786716 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot]: 0/3 streams completed; 1214/10000 splits assigned or completed. 2024-04-25 06:17:40.800388: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:41.801656: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025862.275798 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:17:42.805045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025863.356536 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 
2024-04-25 06:17:43.815047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025864.475044 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:17:44.815260: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:45.816155: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025866.269354 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:17:46.825062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025867.447612 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:17:47.835074: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025868.695422 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 
2024-04-25 06:17:48.855067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:49.856499: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025869.867331 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:17:50.856670: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:51.875048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025872.025032 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:17:52.885045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025873.505031 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 
2024-04-25 06:17:53.895042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:54.905048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025875.256388 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:17:55.915051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:56.925046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:57.935061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025878.066491 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:17:58.945042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025879.215737 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:17:59.955212: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:00.965084: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:01.986542: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:02.995118: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025883.786830 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:18:03.995995: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025884.804192 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 
2024-04-25 06:18:04.996731: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:06.005057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:07.025068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025887.615829 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:18:08.035054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025888.776087 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:18:09.036412: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:10.049052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025890.055126 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:18:11.055069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:12.065216: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025892.445467 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:18:13.075047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:14.085048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:15.095053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:16.135053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025896.217042 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:18:17.135537: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025897.866142 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:18:18.145042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025899.025731 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:18:19.145677: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:20.155074: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:21.175050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:22.185044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:23.195049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:24.205396: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025904.986513 3620193 
parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:18:25.215053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:26.215235: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:27.225397: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025908.076196 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:18:28.225817: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:29.235071: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025909.635016 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 
2024-04-25 06:18:30.245053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025910.985031 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:18:31.255118: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:32.275044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025912.586071 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:18:33.275232: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:34.285117: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:35.295044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:36.305039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:37.305202: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025917.495037 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:18:38.315053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:39.322415: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:40.325055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025920.595452 3819959 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot]: 0/3 streams completed; 1394/10000 splits assigned or completed. 
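For orientation, the snapshot_manager progress line immediately above reports the distributed snapshot at 0/3 streams completed with 1394/10000 splits assigned or completed, while the parallel_tfrecord_writer entries show each worker streaming uncommitted chunks into streams/stream_N/uncommitted_chunks/. The following is a minimal, hypothetical sketch of how a fault-tolerant tf.data service and a distributed snapshot of this shape could be brought up; it is not the test's actual code, and the temporary directories, the worker count of 3, and the range(10000) dataset are assumptions chosen only to mirror the shapes seen in this log.

# Hypothetical sketch (assumptions noted above), not the test's code.
import tempfile
import tensorflow as tf

work_dir = tempfile.mkdtemp()      # dispatcher journal/work dir (assumed path)
snapshot_dir = tempfile.mkdtemp()  # snapshot root; chunks land under streams/stream_N/

# A dispatcher with fault_tolerant_mode=True journals its state to work_dir,
# which is what lets it restore stream assignments after a restart.
dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        work_dir=work_dir, fault_tolerant_mode=True))

# Three workers register with the dispatcher; each gets assigned a stream.
workers = [
    tf.data.experimental.service.WorkerServer(
        tf.data.experimental.service.WorkerConfig(
            dispatcher_address=dispatcher.target.split("://")[1]))
    for _ in range(3)
]

dataset = tf.data.Dataset.range(10000)  # mirrors the N/10000 splits in the log

# Kick off the distributed snapshot (signature as of recent TF releases);
# workers then write uncommitted chunks asynchronously until all splits finish.
tf.data.experimental.distributed_save(dataset, snapshot_dir, dispatcher.target)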
I0000 00:00:1714025920.765785 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:18:41.345047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:42.358511: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:43.365046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:44.375036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:45.385044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:46.395137: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:47.405100: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025927.805027 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 
2024-04-25 06:18:48.405556: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:49.415148: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:50.425052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025930.896948 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:18:51.435066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:52.435263: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:53.445042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025933.765127 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 
2024-04-25 06:18:54.455089: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025934.795037 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:18:55.485040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:56.495059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025937.365041 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:18:57.515055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025938.455045 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 
2024-04-25 06:18:58.525014: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:59.534450: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:00.535036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:01.545040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025942.135042 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:19:02.555038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025943.175353 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 
2024-04-25 06:19:03.555155: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:04.556937: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:05.565306: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025946.185030 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:19:06.575136: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025947.336703 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:19:07.595044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:08.595241: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025948.988359 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 
2024-04-25 06:19:09.615052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025950.155106 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:19:10.625228: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:11.645045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:12.655049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025953.039015 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:19:13.675045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:14.695041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025954.727050 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 
2024-04-25 06:19:15.695236: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025955.947257 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:19:16.705043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025957.165775 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:19:17.705216: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:18.715045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:19.725073: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:20.725238: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025960.920881 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:19:21.745072: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:22.755066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025963.446047 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:19:23.775036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:24.785040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:25.835040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025965.856221 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:19:26.843341: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025967.575229 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 
2024-04-25 06:19:27.845040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025968.682626 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:19:28.855065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:29.865044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:30.865347: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025970.875120 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:19:31.875037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025972.346719 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 
2024-04-25 06:19:32.875268: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025973.755460 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:19:33.885039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:34.895051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025975.103039 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:19:35.915041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025976.317601 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:19:36.915228: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:37.925051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:38.925455: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:39.935041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025980.046020 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. I0000 00:00:1714025980.605238 3861471 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot]: 0/3 streams completed; 1613/10000 splits assigned or completed. 2024-04-25 06:19:40.940860: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025981.606122 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 
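The snapshot_manager.cc progress records in this log ("0/3 streams completed; 1613/10000 splits assigned or completed") follow a fixed format, so the write phase can be tracked by scraping them out of a captured log. A minimal sketch, illustrative only and not part of TensorFlow or this test; the log file name is hypothetical:

import re

PROGRESS_RE = re.compile(
    r"tf\.data snapshot progress \[(?P<path>[^\]]+)\]: "
    r"(?P<streams_done>\d+)/(?P<streams_total>\d+) streams completed; "
    r"(?P<splits_done>\d+)/(?P<splits_total>\d+) splits assigned or completed")

def split_progress(log_file):
    # Yields (snapshot path, fraction of splits assigned or completed) for each progress record found.
    with open(log_file) as f:
        for line in f:
            m = PROGRESS_RE.search(line)
            if m:
                yield m.group("path"), int(m.group("splits_done")) / int(m.group("splits_total"))

for path, frac in split_progress("distributed_save_ft_test.log"):  # hypothetical capture of this output
    print(f"{frac:.1%} of splits assigned or completed for {path}")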
2024-04-25 06:19:41.945061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025982.885531 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:19:42.965060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:43.995063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:45.005062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025985.385415 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:19:46.017316: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025986.425558 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 
2024-04-25 06:19:47.017495: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:48.029837: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:49.035062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:50.041343: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025990.998967 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:19:51.041643: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:52.045055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:53.055097: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:54.055287: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:55.065062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:56.085052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: 
UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:57.095079: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:58.106904: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025999.111019 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 2024-04-25 06:19:59.115045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:00.115222: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026000.341093 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:20:01.125111: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026001.985528 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 
2024-04-25 06:20:02.135058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026003.115460 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:20:03.145101: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:04.255064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:05.265071: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026005.815720 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:20:06.285061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:07.285295: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:08.286056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026008.298852 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:20:09.295060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026009.545503 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:20:10.305098: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:11.315219: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:12.315464: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:13.325069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026013.515852 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:20:14.335111: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:15.345057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026015.395470 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 
2024-04-25 06:20:16.355066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026017.257729 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:20:17.375055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026018.375871 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:20:18.385067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:19.395059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026019.787081 3620193 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3fb30d5500158f73_ldcg-aarch64-02-7145a7ea-2867224-616e5b677f7d3.tfrecord*. 
2024-04-25 06:20:20.405056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:21.415121: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:22.435053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026022.856750 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:20:23.445061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026023.892142 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:20:24.455080: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026025.003977 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 
2024-04-25 06:20:25.475083: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026026.045027 3620194 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3be6f1c6894a279b_ldcg-aarch64-02-ba0968a1-2867224-616e5b677f886.tfrecord*. 2024-04-25 06:20:26.055091: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 1, compression: SNAPPY }. Stream 1, chunk 0, number of elements in chunk: 600, chunk size: 8.20312KB. 2024-04-25 06:20:26.485088: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:27.495058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:28.505060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:29.515424: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026030.185361 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 
2024-04-25 06:20:30.525760: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026031.495393 3640016 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7a0c29bef2030b34_ldcg-aarch64-02-35a8cee1-2867224-616e5b768a966.tfrecord*. 2024-04-25 06:20:31.535059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:32.555075: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026033.405039 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:20:33.605066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:34.615089: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026034.655798 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 
2024-04-25 06:20:35.625061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:36.626830: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026036.645852 3636207 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ce15412caf653f86_ldcg-aarch64-02-657e41e6-2867224-616e5b73fc32f.tfrecord*. 2024-04-25 06:20:37.645061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026037.766318 3640015 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__39d94bfc7958fc69_ldcg-aarch64-02-232ff5bd-2867224-616e5b768a876.tfrecord*. 2024-04-25 06:20:38.655056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:39.675061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026040.655427 3895098 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot]: 0/3 streams completed; 1864/10000 splits assigned or completed. 
2024-04-25 06:20:40.685059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:41.695063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:42.705074: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026042.895297 3636206 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7bedeadb989029ad_ldcg-aarch64-02-fe7bfb98-2867224-616e5b7403852.tfrecord*. 2024-04-25 06:20:42.896429: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 2, compression: SNAPPY }. Stream 2, chunk 0, number of elements in chunk: 645, chunk size: 8.81836KB. 2024-04-25 06:20:42.905708: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 612, chunk size: 8.36719KB. 
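A quick consistency check on the three checkpoint records above (streams 1, 2 and 0): the parallel_tfrecord_writer records report 14B per TFRecord, and 600 * 14 B = 8400 B = 8.20312 KiB, 645 * 14 B = 9030 B = 8.81836 KiB, and 612 * 14 B = 8568 B = 8.36719 KiB, which match the reported chunk sizes exactly. So the logged chunk size appears to be simply the element count times the 14-byte record size, with "KB" in these messages meaning KiB (1024 bytes).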
2024-04-25 06:20:43.705276: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:44.725063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:45.735058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:46.225102: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_2_612. Checkpointing distributed tf.data snapshot writer took 3.319318s 2024-04-25 06:20:46.225478: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream 0, chunk 2. 2024-04-25 06:20:46.228614: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/checkpoints/checkpoint_2_645. Checkpointing distributed tf.data snapshot writer took 3.332137s 2024-04-25 06:20:46.228885: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream 2, chunk 2. I0000 00:00:1714026046.231083 3900922 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__c4fd64d4fbc84e46_ldcg-aarch64-02-ba0968a1-2867224-616e5c99f116f.tfrecord*. 2024-04-25 06:20:46.236744: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/checkpoints/checkpoint_2_600. 
Checkpointing distributed tf.data snapshot writer took 20.181542s 2024-04-25 06:20:46.237039: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream 1, chunk 2. 2024-04-25 06:20:46.745061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026047.435053 3900922 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__c4fd64d4fbc84e46_ldcg-aarch64-02-ba0968a1-2867224-616e5c99f116f.tfrecord*. 2024-04-25 06:20:47.755064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:48.765061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026049.490418 3900919 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e5bc448738fe795d_ldcg-aarch64-02-657e41e6-2867224-616e5c99f4928.tfrecord*. I0000 00:00:1714026049.491548 3636209 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2. 
I0000 00:00:1714026049.737416 3902314 snapshot_manager.cc:775] Starting repetition_2 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, source 0 2024-04-25 06:20:49.775111: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:50.785064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026051.077756 3620196 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2. I0000 00:00:1714026051.079101 3640017 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2. I0000 00:00:1714026051.085128 3900918 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__179a3cde929d7b5e_ldcg-aarch64-02-5cc4754f-2867224-616e5c99f48d8.tfrecord*. 2024-04-25 06:20:51.795076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026052.167490 3900918 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__179a3cde929d7b5e_ldcg-aarch64-02-5cc4754f-2867224-616e5c99f48d8.tfrecord*. 
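The checkpoint, chunk, and "Starting repetition_N" messages above are emitted while tf.data service workers write a distributed snapshot under the dispatcher's coordination. A minimal sketch of how such a write is typically driven is shown below; the paths, worker count, and dataset are placeholders loosely inferred from the log (three streams, 10000 splits), and the availability, exact signature, and expected address form of tf.data.experimental.distributed_save are assumptions, not taken from this test.

import tensorflow as tf

# Placeholder paths; the real test uses temporary directories.
work_dir = "/tmp/tf_data_work_dir"
snapshot_path = "/tmp/tf_data_snapshot"

# Dispatcher journals its state in work_dir so it can recover after a restart.
dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        work_dir=work_dir,
        fault_tolerant_mode=True,
    )
)
dispatcher_address = dispatcher.target.split("://")[1]

# One WorkerServer per snapshot stream (the log shows stream_0..stream_2).
workers = [
    tf.data.experimental.service.WorkerServer(
        tf.data.experimental.service.WorkerConfig(
            dispatcher_address=dispatcher_address,
        )
    )
    for _ in range(3)
]

# Stand-in source dataset; the log reports 10000 splits per repetition.
dataset = tf.data.Dataset.range(10_000)

# Assumed API: start the asynchronous distributed snapshot write via the
# dispatcher (address form passed here is an assumption).
tf.data.experimental.distributed_save(dataset, snapshot_path, dispatcher.target)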
2024-04-25 06:20:52.805107: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:53.825064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026054.118633 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 2024-04-25 06:20:54.835059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026055.368787 3900922 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__c4fd64d4fbc84e46_ldcg-aarch64-02-ba0968a1-2867224-616e5c99f116f.tfrecord*. 2024-04-25 06:20:55.835291: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026056.768983 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 
2024-04-25 06:20:56.845055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:57.855051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026058.170206 3900918 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__179a3cde929d7b5e_ldcg-aarch64-02-5cc4754f-2867224-616e5c99f48d8.tfrecord*. 2024-04-25 06:20:58.855295: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026059.458736 3900919 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e5bc448738fe795d_ldcg-aarch64-02-657e41e6-2867224-616e5c99f4928.tfrecord*. 2024-04-25 06:20:59.865062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026060.514179 3900922 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__c4fd64d4fbc84e46_ldcg-aarch64-02-ba0968a1-2867224-616e5c99f116f.tfrecord*. 
2024-04-25 06:21:00.875057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:01.885052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:02.895059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:03.905058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026064.359115 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 2024-04-25 06:21:04.975052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:05.985067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026066.455133 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 
2024-04-25 06:21:06.995129: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026067.710644 3900924 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__86564dda4bcf0082_ldcg-aarch64-02-6eb3a4e8-2867224-616e5c99f6feb.tfrecord*. 2024-04-25 06:21:08.005078: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:09.025069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:10.035088: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026070.807026 3900924 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__86564dda4bcf0082_ldcg-aarch64-02-6eb3a4e8-2867224-616e5c99f6feb.tfrecord*. 2024-04-25 06:21:11.035300: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026071.997918 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 
2024-04-25 06:21:12.045059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:13.055065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026073.370059 3900924 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__86564dda4bcf0082_ldcg-aarch64-02-6eb3a4e8-2867224-616e5c99f6feb.tfrecord*. 2024-04-25 06:21:14.065069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026074.498293 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 2024-04-25 06:21:15.066755: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026075.608719 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 
2024-04-25 06:21:16.075071: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:17.095065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026077.158610 3900922 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__c4fd64d4fbc84e46_ldcg-aarch64-02-ba0968a1-2867224-616e5c99f116f.tfrecord*. 2024-04-25 06:21:18.115157: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026078.159200 3900919 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e5bc448738fe795d_ldcg-aarch64-02-657e41e6-2867224-616e5c99f4928.tfrecord*. 2024-04-25 06:21:19.155220: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:20.176938: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026080.650649 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 
2024-04-25 06:21:21.185057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:22.195070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026082.563767 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 2024-04-25 06:21:23.205060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:24.215058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026084.218829 3900924 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__86564dda4bcf0082_ldcg-aarch64-02-6eb3a4e8-2867224-616e5c99f6feb.tfrecord*. 2024-04-25 06:21:25.225086: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026086.177302 3900918 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__179a3cde929d7b5e_ldcg-aarch64-02-5cc4754f-2867224-616e5c99f48d8.tfrecord*. 
2024-04-25 06:21:26.235059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026086.375400 3640017 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3. I0000 00:00:1714026086.375776 3620196 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3. I0000 00:00:1714026086.376075 3636209 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3. I0000 00:00:1714026086.398543 3946219 snapshot_manager.cc:775] Starting repetition_3 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, source 0 I0000 00:00:1714026087.179227 3900919 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e5bc448738fe795d_ldcg-aarch64-02-657e41e6-2867224-616e5c99f4928.tfrecord*. 2024-04-25 06:21:27.236967: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026087.955277 3620196 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4. 
I0000 00:00:1714026087.955444 3640017 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4. I0000 00:00:1714026087.955519 3636209 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4. I0000 00:00:1714026087.978362 3947775 snapshot_manager.cc:775] Starting repetition_4 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, source 0 2024-04-25 06:21:28.245057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:29.255262: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:30.265273: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:31.275066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:32.275283: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026092.769076 3900922 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__c4fd64d4fbc84e46_ldcg-aarch64-02-ba0968a1-2867224-616e5c99f116f.tfrecord*. 
2024-04-25 06:21:33.285068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:34.305090: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:35.306421: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:36.325064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026096.996804 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 2024-04-25 06:21:37.325480: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:38.345077: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026098.595532 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 
2024-04-25 06:21:39.365059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:40.375058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026100.695674 3957917 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot]: 0/3 streams completed; 4185/10000 splits assigned or completed. I0000 00:00:1714026100.831230 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 2024-04-25 06:21:41.375261: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:42.415072: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026103.048760 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 
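The snapshot_manager progress line above reports stream completion and split counts for the snapshot at the logged tf_data_snapshot base path. Once all streams are reported completed, that directory is expected to be readable back as an ordinary dataset; the sketch below shows that read-back under the assumption that tf.data.Dataset.load accepts a completed distributed snapshot, with a placeholder path standing in for the temporary directory in the log.

import tensorflow as tf

# Placeholder for the tf_data_snapshot base path reported in the log.
snapshot_path = "/tmp/tf_data_snapshot"

# Assumption: a completed distributed snapshot is loadable with
# tf.data.Dataset.load, which restores the saved element_spec and compression.
restored = tf.data.Dataset.load(snapshot_path)
for element in restored.take(3):
    print(element.numpy())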
2024-04-25 06:21:43.435048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:44.455102: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:45.455297: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026105.682733 3900924 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__86564dda4bcf0082_ldcg-aarch64-02-6eb3a4e8-2867224-616e5c99f6feb.tfrecord*. 2024-04-25 06:21:46.465060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026107.421175 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 
2024-04-25 06:21:47.485061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:48.488563: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:49.495066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:50.495253: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026110.568864 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 2024-04-25 06:21:51.515057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:52.525062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026113.115569 3900919 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e5bc448738fe795d_ldcg-aarch64-02-657e41e6-2867224-616e5c99f4928.tfrecord*. 
2024-04-25 06:21:53.535065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:54.545057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:55.555099: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026116.325122 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 2024-04-25 06:21:56.585281: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:57.605767: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026118.255846 3900922 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__c4fd64d4fbc84e46_ldcg-aarch64-02-ba0968a1-2867224-616e5c99f116f.tfrecord*. 
2024-04-25 06:21:58.615065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:59.625064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026119.638713 3900919 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e5bc448738fe795d_ldcg-aarch64-02-657e41e6-2867224-616e5c99f4928.tfrecord*. 2024-04-25 06:22:00.639403: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:01.645058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026122.085622 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 2024-04-25 06:22:02.655061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:03.665060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026123.737664 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 
2024-04-25 06:22:04.675055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:05.695061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026126.506950 3900924 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__86564dda4bcf0082_ldcg-aarch64-02-6eb3a4e8-2867224-616e5c99f6feb.tfrecord*. 2024-04-25 06:22:06.715061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026127.610777 3900918 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__179a3cde929d7b5e_ldcg-aarch64-02-5cc4754f-2867224-616e5c99f48d8.tfrecord*. 2024-04-25 06:22:07.735055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:08.737938: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026128.955961 3900924 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__86564dda4bcf0082_ldcg-aarch64-02-6eb3a4e8-2867224-616e5c99f6feb.tfrecord*. 
2024-04-25 06:22:09.745062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026130.035209 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 2024-04-25 06:22:10.755072: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:11.765099: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:12.785057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026133.067197 3900918 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__179a3cde929d7b5e_ldcg-aarch64-02-5cc4754f-2867224-616e5c99f48d8.tfrecord*. 2024-04-25 06:22:13.795053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026134.670044 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 
2024-04-25 06:22:14.795238: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:15.805066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026135.808196 3900922 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__c4fd64d4fbc84e46_ldcg-aarch64-02-ba0968a1-2867224-616e5c99f116f.tfrecord*. 2024-04-25 06:22:16.815083: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026137.057806 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 2024-04-25 06:22:17.825477: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:18.835061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:19.845056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026140.697043 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 
2024-04-25 06:22:20.855063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:21.865070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:22.885078: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026143.126154 3900918 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__179a3cde929d7b5e_ldcg-aarch64-02-5cc4754f-2867224-616e5c99f48d8.tfrecord*. 2024-04-25 06:22:23.985119: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:25.035061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026145.146766 3900922 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__c4fd64d4fbc84e46_ldcg-aarch64-02-ba0968a1-2867224-616e5c99f116f.tfrecord*. 2024-04-25 06:22:26.045063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026146.745881 3900919 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e5bc448738fe795d_ldcg-aarch64-02-657e41e6-2867224-616e5c99f4928.tfrecord*. 
2024-04-25 06:22:27.045266: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:28.055056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026148.622530 3900922 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__c4fd64d4fbc84e46_ldcg-aarch64-02-ba0968a1-2867224-616e5c99f116f.tfrecord*. I0000 00:00:1714026148.753107 3640017 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5. I0000 00:00:1714026148.753476 3620196 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5. I0000 00:00:1714026148.753768 3636209 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5. 
I0000 00:00:1714026148.757917 3999893 snapshot_manager.cc:775] Starting repetition_5 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, source 0 2024-04-25 06:22:29.055232: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:30.065053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026150.147364 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 2024-04-25 06:22:31.075058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:32.085058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:33.095058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026153.398496 3900918 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__179a3cde929d7b5e_ldcg-aarch64-02-5cc4754f-2867224-616e5c99f48d8.tfrecord*. 
2024-04-25 06:22:34.105080: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026154.927706 3900919 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e5bc448738fe795d_ldcg-aarch64-02-657e41e6-2867224-616e5c99f4928.tfrecord*. 2024-04-25 06:22:35.105324: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:36.105519: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026156.600211 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 2024-04-25 06:22:37.109351: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:38.125057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026159.045996 3900918 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__179a3cde929d7b5e_ldcg-aarch64-02-5cc4754f-2867224-616e5c99f48d8.tfrecord*. 
2024-04-25 06:22:39.135064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026160.056728 3900922 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__c4fd64d4fbc84e46_ldcg-aarch64-02-ba0968a1-2867224-616e5c99f116f.tfrecord*. 2024-04-25 06:22:40.145087: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026160.923499 4012590 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot]: 0/3 streams completed; 5714/10000 splits assigned or completed. I0000 00:00:1714026161.058174 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 2024-04-25 06:22:41.145286: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026161.356810 3620196 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6. I0000 00:00:1714026161.357202 3640017 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6. I0000 00:00:1714026161.357503 3636209 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6. 
I0000 00:00:1714026161.361687 4013747 snapshot_manager.cc:775] Starting repetition_6 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, source 0 2024-04-25 06:22:42.155049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026162.422889 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b5ed437432915179_ldcg-aarch64-02-fe7bfb98-2867224-616e5c99f21b7.tfrecord*. 2024-04-25 06:22:43.165057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026163.479748 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 2024-04-25 06:22:44.172862: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026164.471968 3620196 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7. I0000 00:00:1714026164.485557 3640017 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7. I0000 00:00:1714026164.486085 3636209 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7. 
I0000 00:00:1714026164.490278 4018784 snapshot_manager.cc:775] Starting repetition_7 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, source 0 I0000 00:00:1714026164.633527 3900923 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f469ce3ae75a5233_ldcg-aarch64-02-210a48b4-2867224-616e5c99f48e4.tfrecord*. 2024-04-25 06:22:45.185095: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026165.634020 3900919 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e5bc448738fe795d_ldcg-aarch64-02-657e41e6-2867224-616e5c99f4928.tfrecord*. I0000 00:00:1714026165.937877 3640017 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8. I0000 00:00:1714026165.938254 3636209 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8. I0000 00:00:1714026165.938552 3620196 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8. 
I0000 00:00:1714026165.958678 4021219 snapshot_manager.cc:775] Starting repetition_8 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, source 0 2024-04-25 06:22:46.195065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026166.886474 3900924 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__86564dda4bcf0082_ldcg-aarch64-02-6eb3a4e8-2867224-616e5c99f6feb.tfrecord*. 2024-04-25 06:22:47.195241: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026167.886666 3900918 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__179a3cde929d7b5e_ldcg-aarch64-02-5cc4754f-2867224-616e5c99f48d8.tfrecord*. I0000 00:00:1714026168.045843 3640017 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9. I0000 00:00:1714026168.059618 4028140 snapshot_manager.cc:775] Starting repetition_9 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, source 0 I0000 00:00:1714026168.066255 3620196 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9. I0000 00:00:1714026168.066545 3636209 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9. 
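The "Starting repetition_N" and split-provider reset messages above come from snapshotting a repeated dataset (this test case uses 1000 elements repeated 10 times); each repetition drains the source once, after which the dispatcher resets the split providers and begins the next repetition. A minimal sketch of the kind of pipeline being written, assuming the internal distributed_save entry point these kernel tests exercise (the exact module, symbol, and signature are assumptions; the address and path are placeholders):

    import tensorflow as tf
    from tensorflow.python.data.experimental.ops import distributed_save_op

    dispatcher_address = "localhost:42041"   # placeholder, mirroring the log
    snapshot_path = "/tmp/tf_data_snapshot"  # placeholder path

    # 1000 elements repeated 10 times, matching numelements_1000_numrepetitions_10.
    dataset = tf.data.Dataset.range(1000).repeat(10)

    # Kick off an asynchronous distributed snapshot; the workers then write the
    # chunks seen in the parallel_tfrecord_writer lines above.
    distributed_save_op.distributed_save(dataset, snapshot_path, dispatcher_address)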
2024-04-25 06:22:48.204770: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026168.889148 3900921 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e025ebe5aea400a3_ldcg-aarch64-02-fe7bfb98-2867224-616e5d0de01fd.tfrecord*. 2024-04-25 06:22:49.204948: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026169.889597 3900924 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__5af7e0a7cad9b94f_ldcg-aarch64-02-6eb3a4e8-2867224-616e5d0d81260.tfrecord*. I0000 00:00:1714026170.070519 3640017 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10. 2024-04-25 06:22:50.071346: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 2, compression: SNAPPY }. Stream 2, chunk 2, number of elements in chunk: 2803, chunk size: 38.3223KB. 2024-04-25 06:22:50.071792: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/checkpoints/checkpoint_6_2803. 
Checkpointing distributed tf.data snapshot writer took 400us 2024-04-25 06:22:50.072555: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_2/checkpoints 2024-04-25 06:22:50.072834: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 2, compression: SNAPPY } I0000 00:00:1714026170.076052 3636209 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10. 2024-04-25 06:22:50.076785: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 2, number of elements in chunk: 2566, chunk size: 35.082KB. 2024-04-25 06:22:50.077215: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_6_2566. Checkpointing distributed tf.data snapshot writer took 398us 2024-04-25 06:22:50.078068: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_0/checkpoints 2024-04-25 06:22:50.078376: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 0, compression: SNAPPY } I0000 00:00:1714026170.080337 3620196 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10. 
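The checkpoint and chunk paths in the preceding lines show the on-disk layout the writer maintains while a stream is in flight: chunks accumulate under streams/stream_N/uncommitted_chunks/, periodic writer checkpoints go under streams/stream_N/checkpoints/, and the checkpoints directory is deleted once the stream finishes. A small helper to inspect that layout (directory names are taken only from this log; everything else is a placeholder):

    import os

    def show_snapshot_layout(base_path):
        # Print per-stream contents, e.g. streams/stream_2/uncommitted_chunks/...
        # and streams/stream_2/checkpoints/... while a stream is still writing.
        streams_dir = os.path.join(base_path, "streams")
        for stream in sorted(os.listdir(streams_dir)):
            for sub in ("uncommitted_chunks", "checkpoints"):
                sub_dir = os.path.join(streams_dir, stream, sub)
                if os.path.isdir(sub_dir):
                    print(stream, sub, sorted(os.listdir(sub_dir)))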
2024-04-25 06:22:50.081231: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 1, compression: SNAPPY }. Stream 1, chunk 2, number of elements in chunk: 2774, chunk size: 37.9258KB. 2024-04-25 06:22:50.081688: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/checkpoints/checkpoint_6_2774. Checkpointing distributed tf.data snapshot writer took 413us 2024-04-25 06:22:50.082559: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot/streams/stream_1/checkpoints 2024-04-25 06:22:50.082836: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:22:50.215078: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026170.307144 4032712 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpfcll8voi/tmpnr1gfxoc/tf_data_snapshot 2024-04-25 06:22:51.225058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:52.235145: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:53.255062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:54.275059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because 
there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:55.285243: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:56.325058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:57.175573: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence 2024-04-25 06:22:57.176369: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence 2024-04-25 06:22:57.325295: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:57.465269: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 37731 2024-04-25 06:22:57.466598: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 41833 2024-04-25 06:22:57.467451: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 43233 2024-04-25 06:22:57.506558: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 33321 [ OK ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_2_numelements_1000_numrepetitions_10_numworkers_3 [ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numelements_1_numrepetitions_1_numworkers_1 [ SKIPPED ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numelements_1_numrepetitions_1_numworkers_1 [ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_2_numelements_2_numrepetitions_1_numworkers_3 2024-04-25 06:23:04.930364: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpkpu36go1/tf_data_dispatcher_journal 2024-04-25 06:23:04.930446: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 
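The next test case begins by bringing up a dispatcher with fault_tolerant_mode: true and a work_dir, which is what enables the journal-based recovery seen later in this log. A rough public-API equivalent, as a sketch only (the port and work_dir values are placeholders; only fields known to exist in tf.data.experimental.service.DispatcherConfig are used):

    import tensorflow as tf

    dispatcher = tf.data.experimental.service.DispatchServer(
        tf.data.experimental.service.DispatcherConfig(
            port=42041,              # fixed port so a restarted dispatcher keeps the same address
            protocol="grpc",
            work_dir="/tmp/tf_data_dispatcher_work_dir",  # the journal lives here
            fault_tolerant_mode=True,                      # enables journaling for recovery
        )
    )
    print(dispatcher.target)  # e.g. "grpc://localhost:42041"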
2024-04-25 06:23:04.930684: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpkpu36go1" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:23:04.930709: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:42041 2024-04-25 06:23:04.945063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:23:05.017467: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:42041. Worker config: protocol: "grpc" dispatcher_address: "localhost:42041" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:23:05.017708: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:35493 2024-04-25 06:23:05.019912: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:42041. Worker config: protocol: "grpc" dispatcher_address: "localhost:42041" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:23:05.020105: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:37121 2024-04-25 06:23:05.021994: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:42041. 
Worker config: protocol: "grpc" dispatcher_address: "localhost:42041" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:23:05.022169: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:41291 I0000 00:00:1714026185.046795 4062821 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot I0000 00:00:1714026185.297524 4062821 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot 2024-04-25 06:23:05.299580: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 35493 I0000 00:00:1714026185.315492 4062994 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, created stream_0 and assigned to localhost:41291 I0000 00:00:1714026185.316497 4063142 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, created stream_2 and assigned to localhost:35493 I0000 00:00:1714026185.319657 4063137 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, created stream_1 and assigned to localhost:37121 2024-04-25 06:23:05.356295: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:23:05.356814: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream 1, chunk 0. 
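The three "Worker registered with dispatcher" lines correspond to three WorkerServer instances pointed at the dispatcher; each registered worker can then be assigned one of the snapshot streams (stream_0, stream_1, stream_2 above). A minimal sketch with the public API, under the assumption of the placeholder address below:

    import tensorflow as tf

    dispatcher_address = "localhost:42041"  # placeholder mirroring the log

    workers = [
        tf.data.experimental.service.WorkerServer(
            tf.data.experimental.service.WorkerConfig(
                dispatcher_address=dispatcher_address,
                heartbeat_interval_ms=100,  # frequent heartbeats, as in the logged worker config
            )
        )
        for _ in range(3)
    ]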
2024-04-25 06:23:05.362337: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:23:05.362813: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream 0, chunk 0. 2024-04-25 06:23:05.385600: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:23:05.435452: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 42041 2024-04-25 06:23:05.436300: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpkpu36go1/tf_data_dispatcher_journal 2024-04-25 06:23:05.436541: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 79us. 2024-04-25 06:23:05.453045: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:23:05.456555: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:23:05.525550: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream 2, chunk 0. 
I0000 00:00:1714026185.747144 4063990 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot 2024-04-25 06:23:05.747690: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 42041 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpkpu36go1" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:23:05.747768: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:42041 2024-04-25 06:23:05.747887: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:23:05.925227: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 146ms. 2024-04-25 06:23:05.925365: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 105ms. 2024-04-25 06:23:06.035232: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 165ms. 2024-04-25 06:23:06.075175: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 175ms. I0000 00:00:1714026186.096223 4063987 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__37f7601f7ad27503_ldcg-aarch64-02-58e6d4ce-2867224-616e5d1eccbd6.tfrecord*. 2024-04-25 06:23:06.205033: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 2024-04-25 06:23:06.215251: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 237ms. 2024-04-25 06:23:06.255228: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 207ms. 2024-04-25 06:23:06.455160: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 257ms. 
2024-04-25 06:23:06.475737: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 259ms. 2024-04-25 06:23:06.481154: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:42041. Worker config: port: 35493 protocol: "grpc" dispatcher_address: "localhost:42041" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:23:06.481341: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:35493 2024-04-25 06:23:06.543602: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:23:06.544095: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream 2, chunk 0. 2024-04-25 06:23:06.576953: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream: 2, compression: SNAPPY }. Stream 2, chunk 0, number of elements in chunk: 2, chunk size: 28B. 2024-04-25 06:23:06.577536: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot/streams/stream_2/checkpoints/checkpoint_2_2. Checkpointing distributed tf.data snapshot writer took 522us 2024-04-25 06:23:06.577896: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot/streams/stream_2/checkpoints 2024-04-25 06:23:06.578162: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:23:06.726529: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 
2024-04-25 06:23:06.765115: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:23:06.765179: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:37121 due to timeout 2024-04-25 06:23:06.775817: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 0, chunk size: 0B. 2024-04-25 06:23:06.776411: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_0_0. Checkpointing distributed tf.data snapshot writer took 528us 2024-04-25 06:23:06.776713: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot/streams/stream_0/checkpoints 2024-04-25 06:23:06.776975: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:23:06.978095: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 37121 2024-04-25 06:23:06.996535: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: CANCELLED: Failed to perform worker heartbeat: Cancelled 2024-04-25 06:23:07.015430: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 42041 2024-04-25 06:23:07.016251: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpkpu36go1/tf_data_dispatcher_journal 2024-04-25 06:23:07.016554: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 199us. 
2024-04-25 06:23:07.105651: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714026187.326704 4066916 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot 2024-04-25 06:23:07.326980: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 42041 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpkpu36go1" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:23:07.327051: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:42041 2024-04-25 06:23:07.327164: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:23:07.350294: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:42041. Worker config: port: 37121 protocol: "grpc" dispatcher_address: "localhost:42041" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:23:07.350516: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:37121 2024-04-25 06:23:07.366504: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 41291 2024-04-25 06:23:07.430168: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:23:07.430479: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream 1, chunk 0. 2024-04-25 06:23:07.446371: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 42041 2024-04-25 06:23:07.447113: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpkpu36go1/tf_data_dispatcher_journal 2024-04-25 06:23:07.447447: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 205us. 
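This stretch shows the fault-tolerance mechanism itself: the dispatcher is shut down mid-snapshot, a new one starts on the same port with the same work_dir, state is replayed from the journal ("Restored from journal in ..."), the in-progress snapshot is resumed, and workers whose heartbeats failed with "Socket closed" simply re-register. A sketch of the restart side using the public API (this only illustrates the recovery step; the values are placeholders, and the test itself drives the restart through its own cluster helper):

    import tensorflow as tf

    # After the original dispatcher process has died, starting a new DispatchServer
    # with the *same* port and work_dir is all that is needed for recovery: with
    # fault_tolerant_mode on, it finds the journal under work_dir, replays it,
    # resumes the snapshot, and waits for the workers to re-register.
    restarted = tf.data.experimental.service.DispatchServer(
        tf.data.experimental.service.DispatcherConfig(
            port=42041,                                   # same port as before the restart
            work_dir="/tmp/tf_data_dispatcher_work_dir",  # same work_dir -> same journal
            fault_tolerant_mode=True,
        )
    )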
2024-04-25 06:23:07.485755: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:23:07.540360: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714026187.577475 4068048 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot 2024-04-25 06:23:07.577744: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 42041 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpkpu36go1" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:23:07.577814: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:42041 2024-04-25 06:23:07.595047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:23:07.596404: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream: 1, compression: SNAPPY }. Stream 1, chunk 0, number of elements in chunk: 0, chunk size: 0B. 2024-04-25 06:23:07.596964: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot/streams/stream_1/checkpoints/checkpoint_0_0. Checkpointing distributed tf.data snapshot writer took 526us 2024-04-25 06:23:07.597269: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot/streams/stream_1/checkpoints 2024-04-25 06:23:07.597613: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:23:07.616471: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:42041. 
Worker config: port: 41291 protocol: "grpc" dispatcher_address: "localhost:42041" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:23:07.616685: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:41291 I0000 00:00:1714026187.697736 4068046 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpiepn21qw/tmpwsaw_ekr/tf_data_snapshot 2024-04-25 06:23:08.408853: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence [[{{node IteratorGetNext}}]] 2024-04-25 06:23:08.410326: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 41291 2024-04-25 06:23:08.411264: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 37121 2024-04-25 06:23:08.411980: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 35493 2024-04-25 06:23:08.447966: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 42041 [ OK ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_2_numelements_2_numrepetitions_1_numworkers_3 [ RUN ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSplitNames_test_mode_eager_tfapiversion_1_badsplitfilename_split [ SKIPPED ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSplitNames_test_mode_eager_tfapiversion_1_badsplitfilename_split [ RUN ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSplitNames_test_mode_graph_tfapiversion_2_badsplitfilename_split0x 2024-04-25 06:23:08.988712: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpu4b8q0jy/tf_data_dispatcher_journal 2024-04-25 06:23:08.988788: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 
2024-04-25 06:23:08.989061: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpu4b8q0jy" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:23:08.989085: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:45333 2024-04-25 06:23:09.005048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive I0000 00:00:1714026189.046877 4071462 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpil0laoc3/tmp_q0ex4cw/tf_data_snapshot I0000 00:00:1714026189.158017 4071462 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpil0laoc3/tmp_q0ex4cw/tf_data_snapshot 2024-04-25 06:23:09.245989: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45333 2024-04-25 06:23:09.246753: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/efbab71aa75682c5a19f95e912086183kl0p8ioj/tmpu4b8q0jy/tf_data_dispatcher_journal 2024-04-25 06:23:09.246956: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 47us. 
-- Test timed out at 2024-04-25 06:23:09 UTC --
Current thread 0x0000ffffa8547420 (most recent call first):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/service/server_lib.py", line 207 in __init__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/test_base.py", line 275 in restart_dispatcher
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.py", line 195 in testSnapshotRecoveryFailsWithBadSplitNames
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343 in execute_test_method
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360 in decorated
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/parameterized.py", line 314 in bound_param_test
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 579 in _callTestMethod
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 623 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 678 in __call__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 122 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 84 in __call__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 122 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 84 in __call__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/runner.py", line 217 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/main.py", line 274 in runTests
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/main.py", line 102 in __init__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2537 in _run_and_get_tests_result
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2568 in run_tests
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2156 in _run_in_app
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2049 in main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 51 in g_main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/app.py", line 258 in _run_main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/app.py", line 312 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 60 in main_wrapper
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/benchmark.py", line 489 in benchmarks_main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 62 in main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/test.py", line 53 in main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.py", line 534 in
================================================================================
==================== Test output for //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_ft_test (shard 13 of 17):
2024-04-25 06:07:43.338889: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Running tests under Python 3.11.6: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/python_aarch64-unknown-linux-gnu/bin/python3
[ RUN ] SnapshotFtTest.testLargeMultiSourceSnapshotRecoversAndCompletes_test_mode_eager_tfapiversion_2_numsources_1_numworkers_1
2024-04-25 06:07:47.007834: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgzzcj6k6/tf_data_dispatcher_journal
2024-04-25 06:07:47.007916: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state.
2024-04-25 06:07:47.008709: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgzzcj6k6" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3
2024-04-25 06:07:47.008739: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:45107
2024-04-25 06:07:47.016095: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive
2024-04-25 06:07:47.038197: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:45107.
Worker config: protocol: "grpc" dispatcher_address: "localhost:45107" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:47.038418: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:40139 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR I0000 00:00:1714025267.226887 2751718 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot I0000 00:00:1714025267.357448 2751718 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot I0000 00:00:1714025267.358584 2751625 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot, created stream_0 and assigned to localhost:40139 2024-04-25 06:07:47.359519: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 40139 2024-04-25 06:07:47.383578: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45107 2024-04-25 06:07:47.384014: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:07:47.384364: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgzzcj6k6/tf_data_dispatcher_journal 2024-04-25 06:07:47.384609: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot, stream 0, chunk 0. 2024-04-25 06:07:47.384633: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 76us. 
I0000 00:00:1714025267.557721 2753715 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot 2024-04-25 06:07:47.557968: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 45107 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgzzcj6k6" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:47.558057: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:45107 2024-04-25 06:07:47.558180: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025267.585440 2753712 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__259ac209b81ea452_ldcg-aarch64-02-2421eb27-2722052-616e59b32d261.tfrecord*. 2024-04-25 06:07:47.586134: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 2024-04-25 06:07:47.657887: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:07:47.658411: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot, stream 0, chunk 0. 2024-04-25 06:07:47.658778: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:45107. 
Worker config: port: 40139 protocol: "grpc" dispatcher_address: "localhost:45107" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:47.658953: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:40139 2024-04-25 06:07:48.559309: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025268.586089 2755682 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__cd52e723aef6e835_ldcg-aarch64-02-98d9df9e-2722052-616e59b36ff76.tfrecord*. 2024-04-25 06:07:49.003219: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 1000, chunk size: 13.6719KB. 2024-04-25 06:07:49.003829: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_2_1000. 
Checkpointing distributed tf.data snapshot writer took 551us 2024-04-25 06:07:49.004241: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot/streams/stream_0/checkpoints 2024-04-25 06:07:49.004570: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot, stream: 0, compression: SNAPPY } I0000 00:00:1714025269.106391 2761860 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpry5h6q77/tmpyofjkj33/tf_data_snapshot 2024-04-25 06:07:49.575036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:07:50.429510: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence 2024-04-25 06:07:50.430080: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence 2024-04-25 06:07:50.584740: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:07:50.619101: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 40139 2024-04-25 06:07:50.645134: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45107 [ OK ] SnapshotFtTest.testLargeMultiSourceSnapshotRecoversAndCompletes_test_mode_eager_tfapiversion_2_numsources_1_numworkers_1 [ RUN ] SnapshotFtTest.testMultipleDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numworkers_3 [ SKIPPED ] SnapshotFtTest.testMultipleDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numworkers_3 [ RUN ] SnapshotFtTest.testNonrepeatedDatasetDoesntProduceSecondRepetitionDir_test_mode_eager_tfapiversion_2_numsources_3_numworkers_1 2024-04-25 06:07:50.882158: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpzpp1571v/tf_data_dispatcher_journal 2024-04-25 06:07:50.882256: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 
2024-04-25 06:07:50.882603: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpzpp1571v" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:50.882640: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:38859 2024-04-25 06:07:50.882654: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:07:50.884697: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38859. Worker config: protocol: "grpc" dispatcher_address: "localhost:38859" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:50.884897: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:37341 I0000 00:00:1714025270.898138 2771171 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot I0000 00:00:1714025270.979543 2771171 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot 2024-04-25 06:07:51.005534: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 37341 2024-04-25 06:07:51.705253: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 38859 2024-04-25 06:07:51.706185: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpzpp1571v/tf_data_dispatcher_journal 2024-04-25 06:07:51.706397: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 71us. 
I0000 00:00:1714025271.784903 2774848 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot 2024-04-25 06:07:51.785182: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 38859 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpzpp1571v" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:51.785287: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:38859 2024-04-25 06:07:51.785418: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025271.815701 2774839 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot, created stream_0 and assigned to localhost:37341 2024-04-25 06:07:51.871113: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:07:51.871182: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38859. Worker config: port: 37341 protocol: "grpc" dispatcher_address: "localhost:38859" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:51.871424: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:37341 2024-04-25 06:07:51.871784: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot, stream 0, chunk 0. I0000 00:00:1714025271.873910 2775904 parallel_tfrecord_writer.cc:167] Writing TFRecord of 42B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fa13f80e04f621ac_ldcg-aarch64-02-fd8ad57c-2722052-616e59b774a1b.tfrecord*. 
2024-04-25 06:07:52.795043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025272.875150 2775904 parallel_tfrecord_writer.cc:167] Writing TFRecord of 42B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fa13f80e04f621ac_ldcg-aarch64-02-fd8ad57c-2722052-616e59b774a1b.tfrecord*. 2024-04-25 06:07:53.795575: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025273.909720 2775904 parallel_tfrecord_writer.cc:167] Writing TFRecord of 42B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fa13f80e04f621ac_ldcg-aarch64-02-fd8ad57c-2722052-616e59b774a1b.tfrecord*. 2024-04-25 06:07:54.649435: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 1000, chunk size: 41.0156KB. 2024-04-25 06:07:54.649922: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_4_1000. 
Checkpointing distributed tf.data snapshot writer took 431us 2024-04-25 06:07:54.650509: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot/streams/stream_0/checkpoints 2024-04-25 06:07:54.650851: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot, stream: 0, compression: SNAPPY } I0000 00:00:1714025274.725660 2788849 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpsvrush68/tmpn4_mdm1c/tf_data_snapshot 2024-04-25 06:07:54.775933: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 37341 2024-04-25 06:07:54.787473: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 38859 [ OK ] SnapshotFtTest.testNonrepeatedDatasetDoesntProduceSecondRepetitionDir_test_mode_eager_tfapiversion_2_numsources_3_numworkers_1 [ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_1_numelements_1_numrepetitions_1_numworkers_3 [ SKIPPED ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_1_numelements_1_numrepetitions_1_numworkers_3 [ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numelements_1000_numrepetitions_10_numworkers_1 [ SKIPPED ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numelements_1000_numrepetitions_10_numworkers_1 [ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_2_numelements_1_numrepetitions_10_numworkers_3 2024-04-25 06:07:55.049184: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpverzw_xx/tf_data_dispatcher_journal 2024-04-25 06:07:55.049264: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 
2024-04-25 06:07:55.049582: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpverzw_xx" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:55.049625: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:41091 2024-04-25 06:07:55.049640: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:07:55.051600: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41091. Worker config: protocol: "grpc" dispatcher_address: "localhost:41091" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:55.051790: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:40815 2024-04-25 06:07:55.056815: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41091. Worker config: protocol: "grpc" dispatcher_address: "localhost:41091" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:55.057022: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:39081 2024-04-25 06:07:55.058792: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41091. Worker config: protocol: "grpc" dispatcher_address: "localhost:41091" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:55.058967: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:34661 WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/contextlib.py:105: TensorFlowTestCase.test_session (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `self.session()` or `self.cached_session()` instead. W0425 06:07:55.070394 281473350857760 deprecation.py:50] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/contextlib.py:105: TensorFlowTestCase.test_session (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `self.session()` or `self.cached_session()` instead. 
2024-04-25 06:07:55.076183: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled I0000 00:00:1714025275.089858 2791811 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot I0000 00:00:1714025275.107984 2791811 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot 2024-04-25 06:07:55.125426: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 40815 2024-04-25 06:07:55.128641: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 41091 2024-04-25 06:07:55.129330: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpverzw_xx/tf_data_dispatcher_journal 2024-04-25 06:07:55.129538: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 76us. 2024-04-25 06:07:55.158727: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:07:55.163370: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714025275.447154 2792365 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot 2024-04-25 06:07:55.448213: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 41091 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpverzw_xx" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:55.448305: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:41091 2024-04-25 06:07:55.448352: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025275.449037 2792352 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, created stream_0 and assigned to localhost:34661 I0000 00:00:1714025275.452681 2793172 snapshot_manager.cc:687] For snapshot at 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, created stream_1 and assigned to localhost:39081 2024-04-25 06:07:55.525156: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:07:55.525683: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream 1, chunk 0. 2024-04-25 06:07:55.527366: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:07:55.527887: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream 0, chunk 0. I0000 00:00:1714025275.547972 2794863 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, created stream_2 and assigned to localhost:40815 I0000 00:00:1714025275.577395 2794832 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1. I0000 00:00:1714025275.578104 2794825 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4de81f980180f93e_ldcg-aarch64-02-1b5faee-2722052-616e59baf0b2d.tfrecord*. 
I0000 00:00:1714025275.585026 2794863 snapshot_manager.cc:775] Starting repetition_1 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, source 0 I0000 00:00:1714025275.586432 2794829 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1. I0000 00:00:1714025275.593484 2794832 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2. I0000 00:00:1714025275.593757 2794829 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2. 2024-04-25 06:07:55.596929: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:07:55.596983: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41091. Worker config: port: 40815 protocol: "grpc" dispatcher_address: "localhost:41091" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:55.597149: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:40815 2024-04-25 06:07:55.597406: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream 2, chunk 0. 
I0000 00:00:1714025275.629035 2794863 snapshot_manager.cc:775] Starting repetition_2 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, source 0 I0000 00:00:1714025275.631306 2794832 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3. I0000 00:00:1714025275.635124 2795087 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1. I0000 00:00:1714025275.657346 2794863 snapshot_manager.cc:775] Starting repetition_3 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, source 0 I0000 00:00:1714025275.682349 2794829 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3. I0000 00:00:1714025275.682673 2795087 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2. I0000 00:00:1714025275.683108 2794832 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4. I0000 00:00:1714025275.683400 2794829 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4. 
I0000 00:00:1714025275.683673 2795087 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3. 2024-04-25 06:07:55.719610: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:07:55.720111: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream 1, chunk 0. I0000 00:00:1714025275.722639 2795179 snapshot_manager.cc:775] Starting repetition_4 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, source 0 I0000 00:00:1714025275.726539 2794829 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5. I0000 00:00:1714025275.727065 2795087 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4. I0000 00:00:1714025275.727576 2794832 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5. I0000 00:00:1714025275.727749 2795087 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5. 
I0000 00:00:1714025275.797273 2795179 snapshot_manager.cc:775] Starting repetition_5 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, source 0 I0000 00:00:1714025275.939887 2794829 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6. 2024-04-25 06:07:55.939961: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. I0000 00:00:1714025275.957324 2796366 snapshot_manager.cc:775] Starting repetition_6 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, source 0 I0000 00:00:1714025276.004607 2794832 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6. I0000 00:00:1714025276.005666 2795087 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6. I0000 00:00:1714025276.026074 2795651 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1. I0000 00:00:1714025276.026207 2795651 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2. I0000 00:00:1714025276.026245 2795651 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3. 
I0000 00:00:1714025276.026275 2795651 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4. I0000 00:00:1714025276.026302 2795651 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5. I0000 00:00:1714025276.026413 2795651 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6. I0000 00:00:1714025276.058058 2794832 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7. I0000 00:00:1714025276.073367 2795087 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7. I0000 00:00:1714025276.075244 2795651 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7. 
2024-04-25 06:07:56.076112: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 39081 I0000 00:00:1714025276.135325 2796618 snapshot_manager.cc:775] Starting repetition_7 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, source 0 I0000 00:00:1714025276.148898 2795087 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8. I0000 00:00:1714025276.148919 2794832 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8. I0000 00:00:1714025276.148973 2795651 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8. I0000 00:00:1714025276.182091 2796618 snapshot_manager.cc:775] Starting repetition_8 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, source 0 2024-04-25 06:07:56.257599: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 41091 2024-04-25 06:07:56.258357: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpverzw_xx/tf_data_dispatcher_journal 2024-04-25 06:07:56.258653: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 167us. 
2024-04-25 06:07:56.296317: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:07:56.300768: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714025276.379956 2799646 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot 2024-04-25 06:07:56.380211: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 41091 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpverzw_xx" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:56.380288: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:41091 2024-04-25 06:07:56.380304: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025276.380590 2795087 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9. I0000 00:00:1714025276.380666 2795651 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9. I0000 00:00:1714025276.381019 2794832 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9. 
I0000 00:00:1714025276.400798 2799980 snapshot_manager.cc:775] Starting repetition_9 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, source 0 I0000 00:00:1714025276.438141 2795651 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10. I0000 00:00:1714025276.438429 2795087 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 2 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10. I0000 00:00:1714025276.445161 2794832 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10. 2024-04-25 06:07:56.445725: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 6, chunk size: 84B. 2024-04-25 06:07:56.446222: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_2_6. Checkpointing distributed tf.data snapshot writer took 449us 2024-04-25 06:07:56.446651: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot/streams/stream_0/checkpoints 2024-04-25 06:07:56.446992: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:07:56.450273: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 
2024-04-25 06:07:56.480516: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream: 2, compression: SNAPPY }. Stream 2, chunk 0, number of elements in chunk: 1, chunk size: 14B. 2024-04-25 06:07:56.481105: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot/streams/stream_2/checkpoints/checkpoint_1_1. Checkpointing distributed tf.data snapshot writer took 524us 2024-04-25 06:07:56.481514: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot/streams/stream_2/checkpoints 2024-04-25 06:07:56.481891: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:07:56.549341: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:07:56.549393: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41091. Worker config: port: 39081 protocol: "grpc" dispatcher_address: "localhost:41091" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:56.549607: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:39081 2024-04-25 06:07:56.549922: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream 1, chunk 0. 
2024-04-25 06:07:56.554442: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 34661 2024-04-25 06:07:56.569669: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 41091 2024-04-25 06:07:56.570541: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpverzw_xx/tf_data_dispatcher_journal 2024-04-25 06:07:56.570944: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 250us. 2024-04-25 06:07:56.625488: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:07:56.676812: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714025276.690650 2801596 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot 2024-04-25 06:07:56.690920: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 41091 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpverzw_xx" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:56.691008: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:41091 2024-04-25 06:07:56.695040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025276.696820 2801518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7449a3ee368d5231_ldcg-aarch64-02-b0d957d-2722052-616e59bbee63c.tfrecord*. I0000 00:00:1714025276.696946 2801521 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1. 
I0000 00:00:1714025276.697048 2801521 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2. I0000 00:00:1714025276.697082 2801521 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3. I0000 00:00:1714025276.697145 2801521 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4. I0000 00:00:1714025276.697176 2801521 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5. I0000 00:00:1714025276.697286 2801521 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6. I0000 00:00:1714025276.702977 2801521 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7. 2024-04-25 06:07:56.703447: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:41091. 
Worker config: port: 34661 protocol: "grpc" dispatcher_address: "localhost:41091" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:56.703649: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:34661 I0000 00:00:1714025276.704093 2801521 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8. I0000 00:00:1714025276.704400 2801521 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9. I0000 00:00:1714025276.704696 2801521 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot" stream_index: 1 num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10. 2024-04-25 06:07:56.705364: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream: 1, compression: SNAPPY }. Stream 1, chunk 0, number of elements in chunk: 3, chunk size: 42B. 2024-04-25 06:07:56.705837: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot/streams/stream_1/checkpoints/checkpoint_2_3. 
Checkpointing distributed tf.data snapshot writer took 435us 2024-04-25 06:07:56.706243: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot/streams/stream_1/checkpoints 2024-04-25 06:07:56.706585: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot, stream: 1, compression: SNAPPY } I0000 00:00:1714025276.778514 2801698 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp4elv80ya/tmpjb3ecoby/tf_data_snapshot 2024-04-25 06:07:56.949033: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence [[{{node IteratorGetNext}}]] 2024-04-25 06:07:56.965309: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 34661 2024-04-25 06:07:56.976813: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 39081 2024-04-25 06:07:56.977852: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 40815 2024-04-25 06:07:56.996688: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 41091 [ OK ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_2_numelements_1_numrepetitions_10_numworkers_3 [ RUN ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSourceName_test_mode_graph_tfapiversion_1_badsourcedirname_source [ SKIPPED ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSourceName_test_mode_graph_tfapiversion_1_badsourcedirname_source [ RUN ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSplitNames_test_mode_graph_tfapiversion_1_badsplitfilename_split01 [ SKIPPED ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSplitNames_test_mode_graph_tfapiversion_1_badsplitfilename_split01 [ RUN ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadStreamName_test_mode_graph_tfapiversion_1_badstreamdirname_streamx [ SKIPPED ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadStreamName_test_mode_graph_tfapiversion_1_badstreamdirname_streamx [ RUN ] SnapshotFtTest.testSnapshotRecoveryFailsWithOutOfBoundsSourceName_test_mode_eager_tfapiversion_2 2024-04-25 06:07:57.058685: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpmdkzo4eu/tf_data_dispatcher_journal 2024-04-25 06:07:57.058773: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 
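The dispatcher and worker configs printed in the entries above (fault_tolerant_mode: true, a work_dir holding the dispatcher journal, heartbeat_interval_ms: 100) map onto the public tf.data service API. The following is a minimal sketch under placeholder paths and ports, not the test's own temporary directories or harness.

import tensorflow as tf

# Placeholder directory; in the log this is a bazel temp dir containing
# tf_data_dispatcher_journal.
work_dir = "/tmp/tf_data_work_dir"

# Fault-tolerant dispatcher: with fault_tolerant_mode enabled, dispatcher state
# is journaled under work_dir so a restarted dispatcher can report
# "Restored from journal" instead of "Starting dispatcher from new state".
dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        work_dir=work_dir,
        fault_tolerant_mode=True,
    )
)

# A worker registers with the dispatcher and heartbeats on a fixed interval;
# while the dispatcher is down, heartbeats fail transiently ("Socket closed")
# until it comes back up at the same address.
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher.target.split("://")[1],
        heartbeat_interval_ms=100,
    )
)

# To mimic the dispatcher restarts seen in this log, a new DispatchServer can be
# constructed with the same port and work_dir; fault_tolerant_mode makes it
# recover its previous state from the journal.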
2024-04-25 06:07:57.059092: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpmdkzo4eu" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:57.059117: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:42825 2024-04-25 06:07:57.073632: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive I0000 00:00:1714025277.076878 2805343 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpn_89_24v/tmpx4kmuaxr/tf_data_snapshot I0000 00:00:1714025277.158753 2805343 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpn_89_24v/tmpx4kmuaxr/tf_data_snapshot 2024-04-25 06:07:57.196674: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 42825 2024-04-25 06:07:57.197435: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpmdkzo4eu/tf_data_dispatcher_journal 2024-04-25 06:07:57.197644: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 43us. 2024-04-25 06:07:57.223393: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:07:57.223820: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 42825 [ OK ] SnapshotFtTest.testSnapshotRecoveryFailsWithOutOfBoundsSourceName_test_mode_eager_tfapiversion_2 [ RUN ] SnapshotFtTest.testWorkersDontExceedMaxStreamAssignments_test_mode_eager_tfapiversion_2_workermaxconcurrentsnapshots_1 2024-04-25 06:07:57.228424: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp_i6_halk/tf_data_dispatcher_journal 2024-04-25 06:07:57.228500: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 
2024-04-25 06:07:57.228796: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp_i6_halk" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 1 2024-04-25 06:07:57.228832: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:42085 2024-04-25 06:07:57.228847: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:07:57.231035: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:42085. Worker config: protocol: "grpc" dispatcher_address: "localhost:42085" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:57.231222: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:38661 2024-04-25 06:07:57.233350: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:42085. Worker config: protocol: "grpc" dispatcher_address: "localhost:42085" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:57.233525: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:40851 I0000 00:00:1714025277.239528 2807076 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0 I0000 00:00:1714025277.599584 2807076 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0 I0000 00:00:1714025277.606091 2807064 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0, created stream_0 and assigned to localhost:38661 I0000 00:00:1714025277.606872 2808059 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0, created stream_1 and assigned to localhost:40851 I0000 00:00:1714025277.608851 2807076 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1 2024-04-25 06:07:57.632160: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot 
stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0, stream: 1, compression: SNAPPY } 2024-04-25 06:07:57.636430: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0, stream: 0, compression: SNAPPY } 2024-04-25 06:07:57.636761: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0, stream 1, chunk 0. 2024-04-25 06:07:57.637105: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0, stream 0, chunk 0. I0000 00:00:1714025277.695412 2807076 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1 I0000 00:00:1714025277.716296 2807076 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_2 I0000 00:00:1714025277.765232 2809600 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1c69f22fe96a1056_ldcg-aarch64-02-7362bc75-2722052-616e59bcf75dd.tfrecord*. 
I0000 00:00:1714025277.807556 2807076 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_2 I0000 00:00:1714025277.875695 2807076 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_3 I0000 00:00:1714025277.935045 2807076 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_3 I0000 00:00:1714025277.947176 2807076 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4 I0000 00:00:1714025278.093725 2807076 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4 I0000 00:00:1714025278.129675 2813082 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_5 I0000 00:00:1714025278.201219 2813082 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_5 I0000 00:00:1714025278.208516 2813082 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_6 I0000 00:00:1714025278.287889 2813082 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_6 2024-04-25 06:07:58.325040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025278.329783 2814443 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7 I0000 00:00:1714025278.361401 2814443 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7 I0000 00:00:1714025278.371991 2814443 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8 I0000 00:00:1714025278.427815 2814443 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8 I0000 00:00:1714025278.441152 2814443 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_9 I0000 00:00:1714025278.537421 2814443 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_9 I0000 00:00:1714025278.978329 2809677 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__5e8ba8b7837e8005_ldcg-aarch64-02-5a32428-2722052-616e59bcfb003.tfrecord*. 2024-04-25 06:07:59.068377: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 42085 2024-04-25 06:07:59.069055: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp_i6_halk/tf_data_dispatcher_journal 2024-04-25 06:07:59.069290: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 101us. 
2024-04-25 06:07:59.087437: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:07:59.156504: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714025279.337630 2818120 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4 I0000 00:00:1714025279.356948 2818117 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_5 I0000 00:00:1714025279.373009 2818130 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_3 I0000 00:00:1714025279.381035 2818114 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_9 I0000 00:00:1714025279.382779 2818126 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8 I0000 00:00:1714025279.396042 2818124 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_6 I0000 00:00:1714025279.398520 2818111 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_2 I0000 00:00:1714025279.404171 2818128 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0 I0000 00:00:1714025279.441689 2818122 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1 I0000 00:00:1714025281.131281 2818110 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7 2024-04-25 06:08:01.133364: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 
42085 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmp_i6_halk" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 1 2024-04-25 06:08:01.133459: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:42085 2024-04-25 06:08:01.133483: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025281.167695 2809677 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__5e8ba8b7837e8005_ldcg-aarch64-02-5a32428-2722052-616e59bcfb003.tfrecord*. 2024-04-25 06:08:02.135049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025282.167846 2809600 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1c69f22fe96a1056_ldcg-aarch64-02-7362bc75-2722052-616e59bcf75dd.tfrecord*. 2024-04-25 06:08:03.145026: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025283.175243 2809600 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1c69f22fe96a1056_ldcg-aarch64-02-7362bc75-2722052-616e59bcf75dd.tfrecord*. 
2024-04-25 06:08:04.145184: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025284.184503 2809603 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__bda52fb61f1d211e_ldcg-aarch64-02-ce5a7c43-2722052-616e59bcf4390.tfrecord*. 2024-04-25 06:08:05.147683: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025285.185508 2809600 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1c69f22fe96a1056_ldcg-aarch64-02-7362bc75-2722052-616e59bcf75dd.tfrecord*. 2024-04-25 06:08:06.165041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025286.185739 2809677 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__5e8ba8b7837e8005_ldcg-aarch64-02-5a32428-2722052-616e59bcfb003.tfrecord*. 2024-04-25 06:08:07.176091: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025287.196797 2809600 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1c69f22fe96a1056_ldcg-aarch64-02-7362bc75-2722052-616e59bcf75dd.tfrecord*. 
2024-04-25 06:08:08.185052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025288.196910 2809603 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__bda52fb61f1d211e_ldcg-aarch64-02-ce5a7c43-2722052-616e59bcf4390.tfrecord*.
2024-04-25 06:08:09.195037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025289.256896 2809602 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c644946f37761cb5_ldcg-aarch64-02-83b1c9f1-2722052-616e59c779117.tfrecord*.
2024-04-25 06:08:10.205036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025290.598699 2809602 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c644946f37761cb5_ldcg-aarch64-02-83b1c9f1-2722052-616e59c779117.tfrecord*.
2024-04-25 06:08:11.205223: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:08:11.300208: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0, stream: 1, compression: SNAPPY }. Stream 1, chunk 0, number of elements in chunk: 2543, chunk size: 34.7676KB.
2024-04-25 06:08:11.300685: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_1/checkpoints/checkpoint_4_2543. Checkpointing distributed tf.data snapshot writer took 425us
2024-04-25 06:08:11.301274: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_1/checkpoints
2024-04-25 06:08:11.301558: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0, stream: 1, compression: SNAPPY }
2024-04-25 06:08:11.301842: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2457, chunk size: 33.5918KB.
2024-04-25 06:08:11.302264: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_0/checkpoints/checkpoint_4_2457. Checkpointing distributed tf.data snapshot writer took 391us
2024-04-25 06:08:11.302814: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0/streams/stream_0/checkpoints
2024-04-25 06:08:11.303082: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0, stream: 0, compression: SNAPPY }
I0000 00:00:1714025291.415869 2874673 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_0
I0000 00:00:1714025291.516149 2874671 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4, created stream_0 and assigned to localhost:40851
2024-04-25 06:08:11.540363: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4, stream: 0, compression: SNAPPY }
2024-04-25 06:08:11.540921: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4, stream 0, chunk 0.
I0000 00:00:1714025291.541097 2874673 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1, created stream_0 and assigned to localhost:38661
2024-04-25 06:08:11.615236: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1, stream: 0, compression: SNAPPY }
2024-04-25 06:08:11.860464: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1, stream 0, chunk 0.
I0000 00:00:1714025291.995287 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:08:12.210379: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:08:13.245470: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025293.565098 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:08:14.265040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:15.275051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025295.583462 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:08:16.282941: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025296.643833 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:08:17.295047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025297.925823 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 2024-04-25 06:08:18.305045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025298.975451 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
2024-04-25 06:08:19.307750: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025299.985123 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:08:20.315043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:21.325025: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025302.295705 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 
2024-04-25 06:08:22.340295: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:23.340572: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:24.340743: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:25.342235: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025306.276071 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:08:26.345044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:27.356186: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:28.375063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025308.480530 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 
2024-04-25 06:08:29.445046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025309.615836 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:08:30.445220: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025310.850528 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:08:31.445404: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:32.465057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025312.686690 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:08:33.465219: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025314.081570 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 
2024-04-25 06:08:34.485039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025315.347255 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:08:35.485204: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:36.495048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:37.505074: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:38.515044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:39.515250: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025319.678188 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
2024-04-25 06:08:40.615046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:41.615266: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025321.625846 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:08:42.663899: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:43.664078: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:44.665048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:45.665250: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025326.628530 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
2024-04-25 06:08:46.675044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:47.685039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025327.855028 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 2024-04-25 06:08:48.691570: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025329.151023 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:08:49.691750: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:50.695054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025330.955104 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
2024-04-25 06:08:51.702362: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:52.705171: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025333.635744 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:08:53.725043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:54.725463: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025335.075650 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 
2024-04-25 06:08:55.725646: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:56.731721: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:57.731896: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:58.735098: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025338.805491 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. I0000 00:00:1714025339.095284 3008446 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4]: 0/1 streams completed; 505/5000 splits assigned or completed. I0000 00:00:1714025339.155194 3016509 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1]: 0/1 streams completed; 473/5000 splits assigned or completed. 
2024-04-25 06:08:59.735299: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:00.743962: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:01.750075: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025342.265077 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:09:02.765094: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:03.793748: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025344.445873 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 
2024-04-25 06:09:04.795099: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:05.805099: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:06.805271: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025346.985234 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 2024-04-25 06:09:07.805461: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025348.339457 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:09:08.815063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025349.715923 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 
2024-04-25 06:09:09.825164: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:10.845067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:11.855048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025352.464967 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:09:12.865047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025353.545113 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 2024-04-25 06:09:13.885053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:14.895052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025354.946293 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 
2024-04-25 06:09:15.895234: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025356.348455 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:09:16.905063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025357.435440 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 2024-04-25 06:09:17.925049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025358.765466 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 
2024-04-25 06:09:18.935062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:19.936923: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:20.945053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025361.428254 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:09:21.955060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:22.955343: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:23.965060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025364.155044 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
2024-04-25 06:09:24.965255: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025365.736508 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:09:25.976466: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025366.946961 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:09:26.976640: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025367.976933 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:09:27.977025: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:28.977199: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025369.015594 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 
2024-04-25 06:09:29.977376: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025370.276738 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:09:30.985062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:31.985240: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025372.222594 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:09:32.985424: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025373.277802 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 
2024-04-25 06:09:33.985619: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:34.985787: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025375.049688 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:09:35.995117: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025376.469938 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:09:37.005058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025377.948837 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:09:38.015123: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025378.956272 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 
2024-04-25 06:09:39.023787: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:40.023960: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025380.279994 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:09:41.024124: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025381.327300 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 2024-04-25 06:09:42.024302: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:43.034229: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025383.342434 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 
2024-04-25 06:09:44.034399: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025385.017830 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:09:45.035450: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:46.045094: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025386.123890 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:09:47.055200: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025387.749823 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 
2024-04-25 06:09:48.061522: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:09:49.061734: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025389.140431 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:09:50.061940: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025390.305096 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:09:51.063543: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025391.645489 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:09:52.063714: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:09:53.065179: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025393.277728 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:09:54.065357: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025394.935350 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:09:55.065527: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025396.026654 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
2024-04-25 06:09:56.066477: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:09:57.066647: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025397.925019 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:09:58.066829: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:09:59.075054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025399.121221 3147980 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4]: 0/1 streams completed; 784/5000 splits assigned or completed.
I0000 00:00:1714025399.196925 3147857 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1]: 0/1 streams completed; 908/5000 splits assigned or completed.
I0000 00:00:1714025399.314843 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:10:00.095055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025400.925662 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:10:01.095251: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:02.105059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025402.515958 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
2024-04-25 06:10:03.111043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025404.038248 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
2024-04-25 06:10:04.111234: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:05.115083: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025405.865143 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:10:06.115531: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:07.115694: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:08.115868: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025408.183155 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
2024-04-25 06:10:09.125052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025409.532674 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:10:10.130363: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:11.130545: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025411.867939 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:10:12.135084: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:13.135282: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025413.335039 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
2024-04-25 06:10:14.135451: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:15.145513: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:16.155052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:17.155307: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:18.157204: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:19.165262: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025420.035681 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:10:20.165456: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:21.185030: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:22.195041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:23.205048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:24.215062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:25.235067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025425.395042 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:10:26.245054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:27.255057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025428.075095 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:10:28.255352: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:29.265041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:30.275048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:31.275274: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025431.645439 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:10:32.285047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:33.289991: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:34.290169: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025435.185093 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:10:35.305076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:36.305330: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025436.378242 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
2024-04-25 06:10:37.315056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:38.325059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:39.334136: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025439.405348 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:10:40.335233: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:41.345055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025442.336914 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:10:42.345381: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:43.365179: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025444.015690 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:10:44.375069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:45.385054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025445.407290 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:10:46.389848: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025446.515617 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:10:47.390016: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025448.095107 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:10:48.395063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025449.145750 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:10:49.395250: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:50.396103: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:51.396304: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025452.087310 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
2024-04-25 06:10:52.396501: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:53.415071: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:54.425064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025454.598945 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:10:55.443640: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025456.439866 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:10:56.445036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:10:57.455047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025457.576808 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:10:58.455385: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025458.576962 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
I0000 00:00:1714025459.171751 3248530 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4]: 0/1 streams completed; 1069/5000 splits assigned or completed.
I0000 00:00:1714025459.208107 3249139 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1]: 0/1 streams completed; 1244/5000 splits assigned or completed.
2024-04-25 06:10:59.465073: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025459.845257 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:11:00.485126: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:01.495055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025462.405989 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:11:02.505054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:03.515060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025464.525248 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:11:04.535066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:05.536628: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025466.236242 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:11:06.545053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:07.555104: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025467.653143 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:11:08.565341: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025469.109793 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
2024-04-25 06:11:09.565741: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:10.565951: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:11.575063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:12.585062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:13.588834: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025473.866229 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
2024-04-25 06:11:14.589224: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025475.132921 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:11:15.604695: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:16.604881: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025476.641105 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:11:17.615061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025477.806689 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:11:18.617250: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:19.625056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025480.099374 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:11:20.635061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:21.645052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025481.956490 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
2024-04-25 06:11:22.651295: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:23.665055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025484.399865 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:11:24.675061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:25.675240: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025486.126891 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*.
2024-04-25 06:11:26.675424: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:27.685060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025488.206513 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*.
2024-04-25 06:11:28.685248: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:11:29.685424: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025489.956390 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*.
2024-04-25 06:11:30.695697: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025491.695065 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*.
2024-04-25 06:11:31.715045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:32.735055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025493.025031 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:11:33.755219: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025494.359186 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:11:34.755397: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:35.765059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025496.027220 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 
2024-04-25 06:11:36.775063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025497.186867 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:11:37.775241: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025498.326757 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:11:38.785045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:39.785761: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:40.795035: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025501.295824 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 
2024-04-25 06:11:41.805042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025502.585330 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:11:42.815072: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:43.825070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025503.936520 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 2024-04-25 06:11:44.835091: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025504.967395 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:11:45.845370: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025506.145029 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
2024-04-25 06:11:46.846442: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025507.538522 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:11:47.846619: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025508.553540 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 2024-04-25 06:11:48.846786: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:49.855061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025510.594923 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:11:50.875054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025511.625981 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 
2024-04-25 06:11:51.885252: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:52.895092: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:53.895324: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025513.973315 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:11:54.898979: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025515.476725 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:11:55.905064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:56.905264: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025517.345025 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 
2024-04-25 06:11:57.915064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025518.845015 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:11:58.925049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025519.195263 3355840 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4]: 0/1 streams completed; 1474/5000 splits assigned or completed. I0000 00:00:1714025519.253969 3355556 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1]: 0/1 streams completed; 1661/5000 splits assigned or completed. 2024-04-25 06:11:59.935049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025520.237781 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:12:00.935250: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025521.446597 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
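The two snapshot_manager progress reports above show two distributed snapshots (tf_data_snapshot_1 and tf_data_snapshot_4), each writing a single stream out of 5000 splits through the tf.data service. For orientation only, a setup along the following lines produces this kind of trace; DispatchServer/WorkerServer and their config classes are public tf.data service APIs, while the commented-out save call, the dataset size, and all paths are assumptions rather than details taken from this log:

```python
# Hedged sketch of a tf.data service setup that could produce a trace like the above.
# DispatchServer/WorkerServer and their configs are public tf.data service APIs; the
# distributed_save(...) call below is an assumption about how the test kicks off the
# snapshot and is left commented out.
import tensorflow as tf

dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        work_dir="/tmp/tf_data_dispatcher",  # placeholder path
        fault_tolerant_mode=True,            # matches the fault-tolerant mode used by this test
    )
)
dispatcher_address = dispatcher.target.split("://")[1]

worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(dispatcher_address=dispatcher_address)
)

# 5000 elements, roughly corresponding to the "5000 splits" in the progress reports.
dataset = tf.data.Dataset.range(5000)

# Assumed API for starting the distributed snapshot; name and signature may differ.
# tf.data.experimental.distributed_save(dataset, "/tmp/tf_data_snapshot", dispatcher.target)
```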
2024-04-25 06:12:01.945050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:02.955055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:03.965068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025524.086355 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:12:04.965256: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:05.985058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025526.075031 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
2024-04-25 06:12:06.995179: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:07.995351: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:09.005051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025529.226150 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:12:10.025057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:11.035071: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025531.717950 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 
2024-04-25 06:12:12.055058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:13.075098: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:14.086658: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025534.789013 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:12:15.095076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:16.095249: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025536.610128 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
2024-04-25 06:12:17.095739: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:18.105053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025538.668114 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:12:19.115079: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:20.125059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025540.336412 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 2024-04-25 06:12:21.125253: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:22.135105: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025542.255036 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 
2024-04-25 06:12:23.145059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:24.148214: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025544.865020 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:12:25.155063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:26.155317: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025546.215037 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:12:27.165057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025547.315333 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 
2024-04-25 06:12:28.175052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025548.459193 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:12:29.237194: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:30.245120: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:31.265059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:32.265826: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025552.696111 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
2024-04-25 06:12:33.266030: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:34.271867: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:35.285063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025555.642595 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 2024-04-25 06:12:36.291374: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:37.295099: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:38.295987: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025558.493303 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 
2024-04-25 06:12:39.305310: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025560.288972 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:12:40.305699: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:41.314843: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025561.997461 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:12:42.315047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:43.315366: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:44.330227: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025564.408578 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 
2024-04-25 06:12:45.335122: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025565.517397 2875479 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7c19858fd6738d97_ldcg-aarch64-02-f7475df-2722052-616e59ca4d0c7.tfrecord*. 2024-04-25 06:12:46.335391: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:47.345068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:48.355062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025568.527496 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:12:49.375048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025569.617870 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 
2024-04-25 06:12:50.375753: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025570.688322 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:12:51.385067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025572.155625 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:12:52.395065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:53.405076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:54.425058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025575.053270 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 
2024-04-25 06:12:55.435064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025576.288298 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:12:56.475058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:57.525065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:58.559493: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025579.078662 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. I0000 00:00:1714025579.245675 3450522 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4]: 0/1 streams completed; 1865/5000 splits assigned or completed. I0000 00:00:1714025579.255997 3450580 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1]: 0/1 streams completed; 2021/5000 splits assigned or completed. 
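Comparing these progress reports with the ones logged roughly a minute earlier (1474/5000 and 1661/5000 at 06:11:59) gives a rough write rate for each snapshot; a back-of-the-envelope check using only numbers copied from the log:

```python
# Progress rate between the two snapshot_manager reports (~06:11:59 -> ~06:12:59,
# i.e. about 60 s apart). All counts are copied from the log above.
reports = {
    "tf_data_snapshot_4": (1474, 1865),
    "tf_data_snapshot_1": (1661, 2021),
}
elapsed_s = 60.0
total_splits = 5000

for name, (before, after) in reports.items():
    rate = (after - before) / elapsed_s                 # splits per second
    remaining_min = (total_splits - after) / rate / 60  # time left at this rate
    print(f"{name}: {rate:.1f} splits/s, ~{remaining_min:.0f} min to finish at this rate")
```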
2024-04-25 06:12:59.585070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025580.170324 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:13:00.615064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025581.367706 2875478 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__19703a96855567be_ldcg-aarch64-02-87249430-2722052-616e59ca4cfa3.tfrecord*. 2024-04-25 06:13:01.635039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025582.397863 2876647 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d8b488cb5ab765f8_ldcg-aarch64-02-b82aea03-2722052-616e59ca882ed.tfrecord*. 2024-04-25 06:13:02.645086: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:03.665059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025584.476604 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
2024-04-25 06:13:04.674405: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:05.674719: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025586.236911 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:13:06.685055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:07.701539: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:08.705062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:09.715072: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025589.776951 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 
2024-04-25 06:13:10.725067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025590.992296 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:13:11.729296: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:12.730509: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:13.731959: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025593.807622 2876646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9a8c2a3f820a730_ldcg-aarch64-02-8a6261c9-2722052-616e59ca884a1.tfrecord*. 2024-04-25 06:13:13.809269: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2165, chunk size: 29.5996KB. 2024-04-25 06:13:13.816394: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2196, chunk size: 30.0234KB. 
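The checkpoint summaries above are consistent with the 14-byte records reported by parallel_tfrecord_writer: 2165 and 2196 records of 14 bytes each come to about 29.60 KB and 30.02 KB, matching the logged chunk sizes (reading KB as KiB, i.e. dividing by 1024). A quick check:

```python
# Cross-check the checkpoint summaries against the 14-byte records logged by
# parallel_tfrecord_writer: chunk size in KiB should be elements * 14 / 1024.
for elements, reported_kb in [(2165, 29.5996), (2196, 30.0234)]:
    computed_kb = elements * 14 / 1024
    print(f"{elements} records -> {computed_kb:.4f} KB (log reports {reported_kb} KB)")
```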
2024-04-25 06:13:14.735063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:13:15.745108: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:13:16.755104: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:13:17.785048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:13:18.425835: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/checkpoints/checkpoint_2_2165. Checkpointing distributed tf.data snapshot writer took 4.616504s
2024-04-25 06:13:18.785258: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:13:19.735380: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1, stream 0, chunk 2.
2024-04-25 06:13:19.785710: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025599.815294 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*.
2024-04-25 06:13:20.795068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:13:21.815068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025602.006809 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*.
2024-04-25 06:13:22.623655: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/checkpoints/checkpoint_2_2196. Checkpointing distributed tf.data snapshot writer took 8.807186s
2024-04-25 06:13:22.623960: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4, stream 0, chunk 2.
2024-04-25 06:13:22.835151: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025603.455575 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*.
2024-04-25 06:13:23.845111: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025604.688751 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*.
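Note on the checkpoint lines above: each snapshot_stream_writer first logs the chunk it is about to checkpoint (2165 elements / 29.5996KB for tf_data_snapshot_1, 2196 elements / 30.0234KB for tf_data_snapshot_4) and then reports how long writing the checkpoint file took (4.616504s and 8.807186s respectively) before resuming at chunk 2. A back-of-the-envelope sketch of that overhead, using only the values copied from the log (the snippet is illustrative and not part of the test):

    # Checkpoint overhead per chunk, from the snapshot_stream_writer lines above.
    checkpoints = [
        # (snapshot, elements in chunk, chunk size in KB, checkpoint duration in s)
        ("tf_data_snapshot_1", 2165, 29.5996, 4.616504),
        ("tf_data_snapshot_4", 2196, 30.0234, 8.807186),
    ]
    for name, elements, size_kb, seconds in checkpoints:
        print(f"{name}: {seconds:.1f}s to checkpoint {elements} elements "
              f"({size_kb:.1f}KB), i.e. ~{size_kb / seconds:.1f}KB/s")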
2024-04-25 06:13:24.845307: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025605.710659 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:13:25.849547: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:26.855061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:27.860156: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025608.106233 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:13:28.865043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:29.875063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025610.046097 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 
2024-04-25 06:13:30.885159: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025611.085751 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 2024-04-25 06:13:31.905076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025612.349067 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 2024-04-25 06:13:32.945062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:33.955082: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025614.875041 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 
2024-04-25 06:13:34.973862: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:35.975239: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025616.596615 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:13:37.005087: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025617.607842 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 2024-04-25 06:13:38.025077: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025618.662176 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 
2024-04-25 06:13:39.045059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:40.055111: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025620.773647 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:13:41.065069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:42.075053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025622.769688 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:13:43.095057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:44.100865: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025624.382206 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 
2024-04-25 06:13:45.115073: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025626.016416 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:13:46.125052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:47.135069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025627.512973 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:13:48.225058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:49.225256: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025629.648727 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 
2024-04-25 06:13:50.235063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025631.028169 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:13:51.245069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025632.086543 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:13:52.255042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:53.314873: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025633.668261 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 
2024-04-25 06:13:54.315059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:13:55.325065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:13:56.335071: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025636.366157 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*.
2024-04-25 06:13:57.345061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:13:58.365051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025639.126384 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*.
I0000 00:00:1714025639.325653 3544115 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1]: 0/1 streams completed; 2693/5000 splits assigned or completed.
I0000 00:00:1714025639.335328 3544149 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4]: 0/1 streams completed; 2528/5000 splits assigned or completed.
2024-04-25 06:13:59.395060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025640.126721 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 2024-04-25 06:14:00.405071: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025641.376616 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:14:01.434429: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:02.435045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025643.394196 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 
2024-04-25 06:14:03.435335: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:04.455087: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:05.465061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025646.270913 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:14:06.485064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025647.297965 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 
2024-04-25 06:14:07.505067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:08.515075: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:09.525047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:10.535068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025651.451134 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:14:11.545050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:12.549003: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:13.565084: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025653.645434 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 
2024-04-25 06:14:14.575062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025654.817290 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:14:15.585109: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:16.595068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:17.605058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025657.966489 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:14:18.615088: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025659.618954 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 
2024-04-25 06:14:19.625092: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:20.635059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025661.280629 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:14:21.645061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025662.420032 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 2024-04-25 06:14:22.655058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:23.659899: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025663.935610 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 
2024-04-25 06:14:24.675059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025664.959184 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:14:25.681090: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:26.681344: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025666.715436 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:14:27.685074: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025667.959381 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 
2024-04-25 06:14:28.695062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:29.705056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025670.136132 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 2024-04-25 06:14:30.715055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:31.718558: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:32.725053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025672.746902 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 2024-04-25 06:14:33.735078: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025674.284738 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 
2024-04-25 06:14:34.735377: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025675.678043 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:14:35.745082: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:36.745595: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025676.847813 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:14:37.745772: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025678.123083 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:14:38.775067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025679.176236 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 
2024-04-25 06:14:39.782302: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:40.783076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025681.287384 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:14:41.783279: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:42.785106: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025683.096333 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:14:43.795060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025684.289805 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 
2024-04-25 06:14:44.805046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025685.647172 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:14:45.815052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:46.825042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025686.898827 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:14:47.835054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025688.296038 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 
2024-04-25 06:14:48.845043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:49.855048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025690.555115 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 2024-04-25 06:14:50.865038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:51.875062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025692.056076 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 2024-04-25 06:14:52.915042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025693.118293 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 
2024-04-25 06:14:53.925051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025694.882193 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 2024-04-25 06:14:54.945025: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:55.946285: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025696.155920 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 2024-04-25 06:14:56.946592: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025697.845378 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 
2024-04-25 06:14:57.955044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:58.975051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025699.395518 3595895 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1]: 0/1 streams completed; 4062/5000 splits assigned or completed. I0000 00:00:1714025699.426196 3597227 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4]: 0/1 streams completed; 3795/5000 splits assigned or completed. I0000 00:00:1714025699.818892 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:14:59.976514: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:00.983551: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025701.546694 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 
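The snapshot_manager.cc:648 progress lines above report, per snapshot path, how many streams are finished and how many of the 5000 splits have been assigned or completed. When scanning a long log like this one, a small parser for that message format is handy; a hypothetical helper (only the regex is grounded in the log text above, the rest is made up for illustration):

import re

PROGRESS_RE = re.compile(
    r"tf\.data snapshot progress \[(?P<path>[^\]]+)\]: "
    r"(?P<streams_done>\d+)/(?P<streams>\d+) streams completed; "
    r"(?P<splits_done>\d+)/(?P<splits>\d+) splits assigned or completed\.")

def parse_progress(line):
    match = PROGRESS_RE.search(line)
    if match is None:
        return None
    return match["path"], int(match["splits_done"]), int(match["splits"])

example = ("tf.data snapshot progress [/tmp/tf_data_snapshot_1]: "
           "0/1 streams completed; 4062/5000 splits assigned or completed.")
print(parse_progress(example))  # ('/tmp/tf_data_snapshot_1', 4062, 5000)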
2024-04-25 06:15:01.987575: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025702.976240 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 2024-04-25 06:15:02.995052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:04.005088: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025704.413493 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:15:05.005636: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025705.528063 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 2024-04-25 06:15:06.015043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025706.540364 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 
2024-04-25 06:15:07.035047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025707.688846 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 2024-04-25 06:15:08.037299: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:09.045140: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025709.257095 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:15:10.055045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:11.065104: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025711.429746 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 
2024-04-25 06:15:12.075041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025712.496492 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 2024-04-25 06:15:13.085095: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025714.088131 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:15:14.125170: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:15.135044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025715.262244 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:15:16.145047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025716.262907 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4b113a0d683822d3_ldcg-aarch64-02-f7475df-2722052-616e5af02632b.tfrecord*. 
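The parallel_tfrecord_writer.cc:167 lines show each writer thread appending 14-byte records to per-shard files under the stream's uncommitted_chunks directory. The container format is an ordinary TFRecord file; a plain, single-threaded sketch of writing and reading such records with the public API (path and payload are illustrative, and this is not the service's internal parallel writer):

import tensorflow as tf

# Write a few 14-byte payloads into a TFRecord file, then read them back.
# "/tmp/example.tfrecord" is an illustrative path, not one from this run.
with tf.io.TFRecordWriter("/tmp/example.tfrecord") as writer:
    for _ in range(3):
        writer.write(b"example-bytes!")  # exactly 14 bytes, like the records above

for record in tf.data.TFRecordDataset("/tmp/example.tfrecord"):
    print(len(record.numpy()))  # -> 14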
2024-04-25 06:15:17.155041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025717.427867 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:15:18.165042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025718.437681 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__f67e7f33c00a9bdd_ldcg-aarch64-02-8a6261c9-2722052-616e5af0217d0.tfrecord*. 2024-04-25 06:15:19.180615: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025719.785068 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 2024-04-25 06:15:20.185230: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025721.055519 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 
2024-04-25 06:15:21.195051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:22.205052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025722.509362 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__ead7fc3b6775e307_ldcg-aarch64-02-8a6261c9-2722052-616e5b619dd6e.tfrecord*. 2024-04-25 06:15:23.206270: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:24.215068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025724.837717 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__34f2cd3f57fe472_ldcg-aarch64-02-b82aea03-2722052-616e5af2e2e05.tfrecord*. 
2024-04-25 06:15:25.225137: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:26.225712: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:27.226232: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025727.857999 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d62a34facbe85ba9_ldcg-aarch64-02-f7475df-2722052-616e5b6573fea.tfrecord*. 2024-04-25 06:15:28.235055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:29.245147: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025729.368103 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 2024-04-25 06:15:30.255049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025730.799877 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__4d17473f15257aad_ldcg-aarch64-02-b4d01fc8-2722052-616e5af2e2a9c.tfrecord*. 
2024-04-25 06:15:31.265047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:32.275044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025732.300306 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d62a34facbe85ba9_ldcg-aarch64-02-f7475df-2722052-616e5b6573fea.tfrecord*. 2024-04-25 06:15:33.285872: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025733.563879 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d62a34facbe85ba9_ldcg-aarch64-02-f7475df-2722052-616e5b6573fea.tfrecord*. 2024-04-25 06:15:34.295463: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025734.951500 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b182cff2be5b4d5b_ldcg-aarch64-02-b82aea03-2722052-616e5b6fecb2a.tfrecord*. 
2024-04-25 06:15:35.305063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:36.375053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025737.298110 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b182cff2be5b4d5b_ldcg-aarch64-02-b82aea03-2722052-616e5b6fecb2a.tfrecord*. 2024-04-25 06:15:37.385081: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:38.395043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025739.113075 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__ead7fc3b6775e307_ldcg-aarch64-02-8a6261c9-2722052-616e5b619dd6e.tfrecord*. 2024-04-25 06:15:39.405048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025740.346065 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fcdf25232587a3d4_ldcg-aarch64-02-b4d01fc8-2722052-616e5b6fed7c3.tfrecord*. 
2024-04-25 06:15:40.415116: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:41.425049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025742.266310 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d62a34facbe85ba9_ldcg-aarch64-02-f7475df-2722052-616e5b6573fea.tfrecord*. 2024-04-25 06:15:42.425341: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:43.435147: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:44.445044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025744.997238 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d62a34facbe85ba9_ldcg-aarch64-02-f7475df-2722052-616e5b6573fea.tfrecord*. 
2024-04-25 06:15:45.445231: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:46.455049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:47.475035: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025747.738360 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d62a34facbe85ba9_ldcg-aarch64-02-f7475df-2722052-616e5b6573fea.tfrecord*. 2024-04-25 06:15:48.485047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:49.495037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025750.187610 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__ead7fc3b6775e307_ldcg-aarch64-02-8a6261c9-2722052-616e5b619dd6e.tfrecord*. 
2024-04-25 06:15:50.505070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:51.515048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025751.867236 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fcdf25232587a3d4_ldcg-aarch64-02-b4d01fc8-2722052-616e5b6fed7c3.tfrecord*. 2024-04-25 06:15:52.515253: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:53.515560: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025754.026843 3484772 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d62a34facbe85ba9_ldcg-aarch64-02-f7475df-2722052-616e5b6573fea.tfrecord*. 
2024-04-25 06:15:54.525141: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:55.535038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:56.545040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025757.125356 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fcdf25232587a3d4_ldcg-aarch64-02-b4d01fc8-2722052-616e5b6fed7c3.tfrecord*. 2024-04-25 06:15:57.555048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:58.555239: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025759.433670 3652860 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1]: 0/1 streams completed; 4982/5000 splits assigned or completed. I0000 00:00:1714025759.433936 3652932 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4]: 0/1 streams completed; 4693/5000 splits assigned or completed. I0000 00:00:1714025759.519649 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b182cff2be5b4d5b_ldcg-aarch64-02-b82aea03-2722052-616e5b6fecb2a.tfrecord*. 
2024-04-25 06:15:59.565061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:00.565335: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:01.585072: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025761.857945 3484773 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__ead7fc3b6775e307_ldcg-aarch64-02-8a6261c9-2722052-616e5b619dd6e.tfrecord*. 2024-04-25 06:16:01.865213: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1, stream: 0, compression: SNAPPY }. Stream 0, chunk 2, number of elements in chunk: 2835, chunk size: 38.7598KB. 2024-04-25 06:16:01.985848: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/checkpoints/checkpoint_6_2835. 
Checkpointing distributed tf.data snapshot writer took 120.571ms
2024-04-25 06:16:02.595045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:16:02.636093: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1/streams/stream_0/checkpoints
2024-04-25 06:16:03.085822: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1, stream: 0, compression: SNAPPY }
I0000 00:00:1714025763.165885 3656968 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_1
I0000 00:00:1714025763.257813 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fcdf25232587a3d4_ldcg-aarch64-02-b4d01fc8-2722052-616e5b6fed7c3.tfrecord*.
I0000 00:00:1714025763.286026 3656968 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8]: 0/0 streams completed; 0/5000 splits assigned or completed.
I0000 00:00:1714025763.286876 3656968 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8, created stream_0 and assigned to localhost:38661
2024-04-25 06:16:03.303966: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8, stream: 0, compression: SNAPPY }
2024-04-25 06:16:03.304538: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8, stream 0, chunk 0.
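The sequence just above is the normal end of a stream: a final checkpoint is written, the stream's checkpoints directory is deleted, the stream and then the whole snapshot (tf_data_snapshot_1) are marked finished, and the dispatcher immediately assigns stream_0 of the next snapshot (tf_data_snapshot_8) to a worker. For reference, a sketch of the user-facing API this test exercises, assuming the tf.data.experimental.distributed_save signature of recent TF 2.x releases (verify against the version under test; all paths and addresses are illustrative):

import tensorflow as tf

# Sketch only: distributed_save(dataset, path, address) is assumed here.
dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        work_dir="/tmp/tf_data_dispatcher", fault_tolerant_mode=True))
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher.target.split("://")[1]))

snapshot_path = "/tmp/tf_data_snapshot_example"
tf.data.experimental.distributed_save(
    tf.data.Dataset.range(5000), snapshot_path, dispatcher.target)

# After the "Finished writing tf.data distributed snapshot" line appears for
# snapshot_path, the materialized data can be read back; exact load options
# may differ by TF version.
restored = tf.data.Dataset.load(snapshot_path)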
2024-04-25 06:16:03.595796: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:04.605041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025765.509910 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:16:05.605516: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:06.625094: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025766.738434 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 2024-04-25 06:16:07.645049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025767.945018 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fcdf25232587a3d4_ldcg-aarch64-02-b4d01fc8-2722052-616e5b6fed7c3.tfrecord*. 
2024-04-25 06:16:08.646799: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:09.655042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025770.353758 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:16:10.665036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:11.675055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025772.204932 3488802 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b182cff2be5b4d5b_ldcg-aarch64-02-b82aea03-2722052-616e5b6fecb2a.tfrecord*. 2024-04-25 06:16:12.685094: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:13.695054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025774.218916 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
2024-04-25 06:16:14.705038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025775.327444 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fcdf25232587a3d4_ldcg-aarch64-02-b4d01fc8-2722052-616e5b6fed7c3.tfrecord*. 2024-04-25 06:16:15.705371: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:16.725035: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025777.596595 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 2024-04-25 06:16:17.735041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:18.735207: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025778.840757 3488801 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fcdf25232587a3d4_ldcg-aarch64-02-b4d01fc8-2722052-616e5b6fed7c3.tfrecord*. 2024-04-25 06:16:19.036483: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4, stream: 0, compression: SNAPPY }. Stream 0, chunk 2, number of elements in chunk: 2804, chunk size: 38.3359KB. 
2024-04-25 06:16:19.036940: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/checkpoints/checkpoint_6_2804. Checkpointing distributed tf.data snapshot writer took 402us
2024-04-25 06:16:19.037693: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4/streams/stream_0/checkpoints
2024-04-25 06:16:19.037948: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4, stream: 0, compression: SNAPPY }
I0000 00:00:1714025779.136006 3667611 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_4
I0000 00:00:1714025779.245703 3667611 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7]: 0/0 streams completed; 0/5000 splits assigned or completed.
I0000 00:00:1714025779.405589 3667611 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7, created stream_0 and assigned to localhost:40851
2024-04-25 06:16:19.625262: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7, stream: 0, compression: SNAPPY }
2024-04-25 06:16:19.625860: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7, stream 0, chunk 0.
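Note the asymmetry with the earlier stream: this checkpoint took 402us against 120.571ms before, and as soon as a stream finishes its checkpoints directory is removed, so only incomplete streams keep writer state on disk. A hypothetical helper that inventories that state for a snapshot directory (the streams/, checkpoints/ and uncommitted_chunks/ names are taken from the paths in this log; everything else is an assumption):

import os

def snapshot_stream_state(base_path):
    """Hypothetical helper: report per-stream writer state under a snapshot.

    The streams/, checkpoints/ and uncommitted_chunks/ names come from the
    paths in this log; anything beyond that is an assumption.
    """
    streams_dir = os.path.join(base_path, "streams")
    for stream in sorted(os.listdir(streams_dir)):
        stream_dir = os.path.join(streams_dir, stream)
        has_checkpoints = os.path.isdir(os.path.join(stream_dir, "checkpoints"))
        has_uncommitted = os.path.isdir(os.path.join(stream_dir, "uncommitted_chunks"))
        print(f"{stream}: checkpoints={has_checkpoints} "
              f"uncommitted_chunks={has_uncommitted}")

snapshot_stream_state("/tmp/tf_data_snapshot_example")  # illustrative path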
2024-04-25 06:16:19.755045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:20.755307: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025781.205424 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 2024-04-25 06:16:21.765103: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025782.455297 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:16:22.805031: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025783.565598 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 
2024-04-25 06:16:23.815040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:24.825056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025785.576456 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:16:25.825301: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:26.835053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:27.875037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025788.480465 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:16:28.885042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025789.609882 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 
2024-04-25 06:16:29.895052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025790.896319 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:16:30.905045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:31.907610: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:32.911064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025793.230202 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:16:33.911366: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025794.257103 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 
2024-04-25 06:16:34.915050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:35.935061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025796.204137 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:16:36.955043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:37.965056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025798.116287 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:16:38.973231: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025799.386287 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 
2024-04-25 06:16:39.975038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025800.645752 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 2024-04-25 06:16:40.978732: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:41.985046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025803.037164 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:16:43.045043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:44.055048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025805.047084 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 
2024-04-25 06:16:45.055712: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:46.065056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025806.365704 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:16:47.075047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:48.085049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025808.936095 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:16:49.095052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:50.105044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025810.898898 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 
2024-04-25 06:16:51.115054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:52.125125: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:53.125336: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025813.315478 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:16:54.135052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:55.143317: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025815.876591 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 
2024-04-25 06:16:56.145057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:57.155050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025817.262780 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:16:58.155259: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025818.819342 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
2024-04-25 06:16:59.165047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:00.185055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:01.185537: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:02.195050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025822.538891 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 2024-04-25 06:17:03.205084: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025823.306114 3705331 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8]: 0/1 streams completed; 518/5000 splits assigned or completed. 2024-04-25 06:17:04.217060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:05.325044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025825.681118 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
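The "tf.data snapshot progress" records (e.g. 518/5000 splits assigned or completed above) are emitted periodically by the dispatcher. From the client side, a test usually polls the snapshot directory for its terminal state instead of parsing these logs. A minimal polling sketch, assuming success and failure are signalled by DONE and ERROR marker files at the snapshot root; the marker names are an assumption about the on-disk layout:

    import os
    import time

    def wait_for_snapshot(path, timeout_s=300.0, poll_s=1.0):
        # Assumption: a completed distributed snapshot leaves a DONE marker at
        # its root and a failed one leaves an ERROR marker.
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            if os.path.exists(os.path.join(path, "ERROR")):
                raise RuntimeError(f"snapshot at {path} failed")
            if os.path.exists(os.path.join(path, "DONE")):
                return
            time.sleep(poll_s)
        raise TimeoutError(f"snapshot at {path} did not finish within {timeout_s}s")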
2024-04-25 06:17:06.325716: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:07.335050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:08.355048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025828.937995 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 2024-04-25 06:17:09.355250: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:10.356222: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:11.356807: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:12.365691: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025832.800157 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 
2024-04-25 06:17:13.375038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025834.258857 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:17:14.385031: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:15.395053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025835.585126 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 2024-04-25 06:17:16.395261: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025837.331461 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 
2024-04-25 06:17:17.402824: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:18.403018: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025838.427436 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. I0000 00:00:1714025839.264802 3751796 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7]: 0/1 streams completed; 364/5000 splits assigned or completed. 2024-04-25 06:17:19.405040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025839.515209 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:17:20.415043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025840.795557 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
2024-04-25 06:17:21.415218: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:22.425035: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025842.433562 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:17:23.435054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:24.445041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025845.021072 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:17:25.455041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:26.475143: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025846.885582 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 
2024-04-25 06:17:27.495055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:28.515200: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:29.535046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025850.188798 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:17:30.556672: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:31.575068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025851.767884 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:17:32.605275: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025852.887450 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
2024-04-25 06:17:33.606096: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:34.606254: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:35.615066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:36.625048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025857.097826 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 2024-04-25 06:17:37.635067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025858.104449 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:17:38.635306: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025859.104836 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 
2024-04-25 06:17:39.655067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:40.665049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:41.675048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:42.685048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025862.970192 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:17:43.685958: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025864.199556 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
2024-04-25 06:17:44.705046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:45.715062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025866.269102 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:17:46.725073: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:47.735039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025867.808837 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:17:48.735206: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:49.735373: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025870.247313 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 
2024-04-25 06:17:50.736509: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:51.737120: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025872.019161 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:17:52.755040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025873.515628 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 
2024-04-25 06:17:53.756743: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:54.758316: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:55.765055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:56.785047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:57.805074: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025878.075126 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:17:58.815068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025879.185119 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 
2024-04-25 06:17:59.825114: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025880.287938 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:18:00.830385: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:01.835129: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:02.845105: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025883.321261 3807896 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8]: 0/1 streams completed; 1257/5000 splits assigned or completed. 2024-04-25 06:18:03.855342: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025883.929133 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 
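Once a snapshot reaches its terminal state (as tf_data_snapshot_4 did earlier in this log, at the "Finished writing tf.data distributed snapshot" record), it can be read back as an ordinary dataset. A minimal sketch, assuming tf.data.Dataset.load can consume the distributed-snapshot layout and recover element_spec and compression from the snapshot's own metadata; pass them explicitly if the installed version requires it:

    import tensorflow as tf

    # Placeholder path; in this run the completed snapshot lives under the
    # .../tmpj95b08tk/tf_data_snapshot_4 directory shown in the log above.
    snapshot_path = "/tmp/tf_data_snapshot_demo"
    ds = tf.data.Dataset.load(snapshot_path)
    for element in ds.take(3):
        print(element)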
2024-04-25 06:18:04.865097 - 06:18:19.016214: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE (same warning as above, repeated roughly once per second)
I0000 00:00:1714025890.315059 - 1714025897.770352 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to the stream_0 uncommitted chunk shard files of tf_data_snapshot_8 (shard__11459a1be68a1b80, shard__47bf812f7debd4ca)
I0000 00:00:1714025899.317379 3814849 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7]: 0/1 streams completed; 788/5000 splits assigned or completed.
2024-04-25 06:18:20.025117 - 06:19:02.595040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE (same warning as above, repeated roughly once per second)
I0000 00:00:1714025903.600141 - 1714025942.165093 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to the stream_0 uncommitted chunk shard files of tf_data_snapshot_8 (shard__11459a1be68a1b80, shard__47bf812f7debd4ca) and tf_data_snapshot_7 (shard__a198ad5a73bd0e8, shard__6d7da730f80a8850)
I0000 00:00:1714025943.415767 3834802 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8]: 0/1 streams completed; 1511/5000 splits assigned or completed.
2024-04-25 06:19:03.625056 - 06:19:19.777849: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE (same warning as above, repeated roughly once per second)
I0000 00:00:1714025946.662785 - 1714025960.475786 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to the stream_0 uncommitted chunk shard files of tf_data_snapshot_7 (shard__a198ad5a73bd0e8, shard__6d7da730f80a8850) and tf_data_snapshot_8 (shard__11459a1be68a1b80, shard__47bf812f7debd4ca)
I0000 00:00:1714025959.355902 3847394 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7]: 0/1 streams completed; 1125/5000 splits assigned or completed.
2024-04-25 06:19:20.785032: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:21.785321: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:22.805045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025963.455587 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:19:23.805241: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:24.805408: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:25.805631: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025965.866389 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
2024-04-25 06:19:26.815233: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025967.296853 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:19:27.825044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025968.675036 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 2024-04-25 06:19:28.845055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:29.855039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:30.865041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025971.358218 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 
2024-04-25 06:19:31.875037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025972.778160 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:19:32.905059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:33.915048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:34.915364: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025975.115501 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:19:35.935058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025976.316546 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 
2024-04-25 06:19:36.945073: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:37.945440: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:38.965038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:39.975052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025980.107171 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:19:40.981822: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025981.625161 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
2024-04-25 06:19:41.982300: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:42.985038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:43.995025: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:44.997914: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025985.585867 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:19:46.001031: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:47.005039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025987.379174 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
2024-04-25 06:19:48.015037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:49.025042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:50.045030: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025990.999468 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:19:51.045243: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:52.045632: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:53.055045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:54.065063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025994.925416 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
2024-04-25 06:19:55.075053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:19:56.075395: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714025996.678725 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*.
2024-04-25 06:19:57.085058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:19:58.095049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:19:59.105052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:00.105402: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:01.115266: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:02.125069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:03.135107: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026003.526263 3869865 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8]: 0/1 streams completed; 2021/5000 splits assigned or completed.
I0000 00:00:1714026003.709030 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*.
2024-04-25 06:20:04.135280: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:05.145201: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026005.918888 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*.
2024-04-25 06:20:06.145914: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:07.156267: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:08.175100: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026008.298924 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*.
2024-04-25 06:20:09.176170: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026009.546165 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__47bf812f7debd4ca_ldcg-aarch64-02-dd740d35-2722052-616e5b8c1f840.tfrecord*. 2024-04-25 06:20:10.185519: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026010.546826 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:20:11.195043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:12.205046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:13.225045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026013.658775 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
2024-04-25 06:20:14.225213: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026014.706542 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:20:15.265050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:16.265416: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026017.265783 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:20:17.273460: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:18.275105: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026018.615824 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 
2024-04-25 06:20:19.285045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026019.465417 3882210 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7]: 0/1 streams completed; 1742/5000 splits assigned or completed. 2024-04-25 06:20:20.295050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:21.305047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:22.325043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026022.406785 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4d03fafa6bfcf82e_ldcg-aarch64-02-dd740d35-2722052-616e5c8032695.tfrecord*. 2024-04-25 06:20:23.335042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026023.935033 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 
2024-04-25 06:20:24.336102: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:25.355066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026026.298132 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:20:26.425459: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:27.455044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:28.465046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:29.475043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026030.186362 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4d03fafa6bfcf82e_ldcg-aarch64-02-dd740d35-2722052-616e5c8032695.tfrecord*. 
2024-04-25 06:20:30.475735: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:31.485041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:32.495048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:33.505191: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026033.920237 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:20:34.525045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:35.535049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:36.545049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026036.658223 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 
2024-04-25 06:20:37.560374: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:38.575232: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:39.585044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:40.595040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026040.610267 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__11459a1be68a1b80_ldcg-aarch64-02-351b679a-2722052-616e5b8c1f735.tfrecord*. 2024-04-25 06:20:41.605045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:42.615060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026043.212617 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 
2024-04-25 06:20:43.625053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:44.645035: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026045.316950 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:20:45.695054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026046.486667 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:20:46.725040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:47.735055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:48.745206: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026049.418338 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1c68c916c2bc6646_ldcg-aarch64-02-351b679a-2722052-616e5c9ac5c6a.tfrecord*. 
2024-04-25 06:20:49.765046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:50.775039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026051.006549 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:20:51.785039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026052.164850 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4d03fafa6bfcf82e_ldcg-aarch64-02-dd740d35-2722052-616e5c8032695.tfrecord*. 2024-04-25 06:20:52.795040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:53.795390: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026054.124652 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 
2024-04-25 06:20:54.795612: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026055.271431 3657747 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1c68c916c2bc6646_ldcg-aarch64-02-351b679a-2722052-616e5c9ac5c6a.tfrecord*. 2024-04-25 06:20:55.819287: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026056.771892 3657746 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__4d03fafa6bfcf82e_ldcg-aarch64-02-dd740d35-2722052-616e5c8032695.tfrecord*. 2024-04-25 06:20:56.835068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:57.845049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026057.867875 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:20:58.855045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026059.165144 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 
2024-04-25 06:20:59.855219: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:00.865046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:01.875047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:02.885048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026063.609760 3915442 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8]: 0/1 streams completed; 2726/5000 splits assigned or completed.
I0000 00:00:1714026063.625028 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*.
2024-04-25 06:21:03.895047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:04.297828: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2727, chunk size: 37.2832KB.
2024-04-25 06:21:04.347546: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/checkpoints/checkpoint_4_2727. Checkpointing distributed tf.data snapshot writer took 49.646ms
2024-04-25 06:21:04.347977: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8, stream 0, chunk 4.
2024-04-25 06:21:04.905045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026064.928970 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*.
2024-04-25 06:21:05.915043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:06.925042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026067.585995 3920839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__b2a6f6c60bbfc831_ldcg-aarch64-02-351b679a-2722052-616e5cab383e9.tfrecord*.
2024-04-25 06:21:07.945043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:08.947998: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:09.965045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026070.797008 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 2024-04-25 06:21:10.967006: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:11.975034: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026072.006860 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:21:12.985034: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026073.315020 3668946 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a198ad5a73bd0e8_ldcg-aarch64-02-dfa7dd2c-2722052-616e5b9bb24d3.tfrecord*. 
2024-04-25 06:21:13.985239: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026074.325735 3668945 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6d7da730f80a8850_ldcg-aarch64-02-b82aea03-2722052-616e5b9bb0a3d.tfrecord*. 2024-04-25 06:21:14.995050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026075.401030 3920839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__b2a6f6c60bbfc831_ldcg-aarch64-02-351b679a-2722052-616e5cab383e9.tfrecord*. 2024-04-25 06:21:16.005053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026076.983208 3920839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__b2a6f6c60bbfc831_ldcg-aarch64-02-351b679a-2722052-616e5cab383e9.tfrecord*. 2024-04-25 06:21:17.015037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:18.025044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026078.135062 3920839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__b2a6f6c60bbfc831_ldcg-aarch64-02-351b679a-2722052-616e5cab383e9.tfrecord*. 
2024-04-25 06:21:19.035054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026079.501370 3938699 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7]: 0/1 streams completed; 2286/5000 splits assigned or completed. 2024-04-25 06:21:20.055038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026080.627900 3920840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__2fb8a4fe128a4213_ldcg-aarch64-02-dd740d35-2722052-616e5cab384b3.tfrecord*. 2024-04-25 06:21:20.630771: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2287, chunk size: 31.2676KB. 2024-04-25 06:21:21.062464: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:21.306037: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/checkpoints/checkpoint_2_2287. Checkpointing distributed tf.data snapshot writer took 675.199ms 2024-04-25 06:21:22.062966: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:22.552284: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7, stream 0, chunk 2. 
I0000 00:00:1714026082.555249 3942519 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fa136dc40b8ad310_ldcg-aarch64-02-21abc122-2722052-616e5cbc95497.tfrecord*. 2024-04-25 06:21:23.063253: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:24.075028: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026084.220177 3920839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__b2a6f6c60bbfc831_ldcg-aarch64-02-351b679a-2722052-616e5cab383e9.tfrecord*. 2024-04-25 06:21:25.085069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:26.095053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026086.168217 3942519 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fa136dc40b8ad310_ldcg-aarch64-02-21abc122-2722052-616e5cbc95497.tfrecord*. 2024-04-25 06:21:27.095209: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026087.168377 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 
2024-04-25 06:21:28.095475: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:29.105068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:30.125019: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:31.135065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:32.145040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026092.786648 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 
2024-04-25 06:21:33.185044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:34.185214: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:35.195037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:36.205043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026096.988380 3942519 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fa136dc40b8ad310_ldcg-aarch64-02-21abc122-2722052-616e5cbc95497.tfrecord*. 2024-04-25 06:21:37.215329: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:38.225080: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026098.594083 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 
2024-04-25 06:21:39.245080: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:40.265043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026100.831305 3920839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__b2a6f6c60bbfc831_ldcg-aarch64-02-351b679a-2722052-616e5cab383e9.tfrecord*. 2024-04-25 06:21:41.275052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:42.275589: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026103.061762 3920840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__2fb8a4fe128a4213_ldcg-aarch64-02-dd740d35-2722052-616e5cab384b3.tfrecord*. 
2024-04-25 06:21:43.295054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:44.305052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:45.305618: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026105.679357 3920840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__2fb8a4fe128a4213_ldcg-aarch64-02-dd740d35-2722052-616e5cab384b3.tfrecord*. 2024-04-25 06:21:46.315046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:47.325055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026107.387362 3942519 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fa136dc40b8ad310_ldcg-aarch64-02-21abc122-2722052-616e5cbc95497.tfrecord*. 
2024-04-25 06:21:48.335039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:49.345047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:50.355042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026110.568044 3920839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__b2a6f6c60bbfc831_ldcg-aarch64-02-351b679a-2722052-616e5cab383e9.tfrecord*. 2024-04-25 06:21:51.365047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:52.375040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026113.092181 3942519 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fa136dc40b8ad310_ldcg-aarch64-02-21abc122-2722052-616e5cbc95497.tfrecord*. 
2024-04-25 06:21:53.375396: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:54.395137: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:55.405045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026116.317553 3942519 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fa136dc40b8ad310_ldcg-aarch64-02-21abc122-2722052-616e5cbc95497.tfrecord*. 2024-04-25 06:21:56.417745: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:57.425910: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026118.085451 3920840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__2fb8a4fe128a4213_ldcg-aarch64-02-dd740d35-2722052-616e5cab384b3.tfrecord*. 
2024-04-25 06:21:58.435046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:59.445048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026119.638614 3920839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__b2a6f6c60bbfc831_ldcg-aarch64-02-351b679a-2722052-616e5cab383e9.tfrecord*. 2024-04-25 06:22:00.455040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:01.465148: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026121.466467 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 2024-04-25 06:22:02.475060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:03.476201: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026123.615469 3971490 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8]: 0/1 streams completed; 3990/5000 splits assigned or completed. 
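[Editor's note] The snapshot progress report above ("0/1 streams completed; 3990/5000 splits assigned or completed") comes from the dispatcher's snapshot manager for the fault-tolerant tf.data service this test brings up. For reference, a minimal sketch of an equivalent in-process service, assuming the public tf.data.experimental.service API; the test itself uses internal test-cluster helpers, so the names and values below are illustrative only:

    import tempfile
    import tensorflow as tf

    # Minimal sketch of a fault-tolerant, in-process tf.data service resembling
    # the dispatcher/worker configs echoed in this log. Illustrative values only.
    work_dir = tempfile.mkdtemp()  # stands in for the _tmp/... work_dir in the log

    dispatcher = tf.data.experimental.service.DispatchServer(
        tf.data.experimental.service.DispatcherConfig(
            work_dir=work_dir,           # journal and snapshot state live here
            fault_tolerant_mode=True,    # enables the journal used for recovery
        ))

    worker = tf.data.experimental.service.WorkerServer(
        tf.data.experimental.service.WorkerConfig(
            dispatcher_address=dispatcher.target.split("://")[1],
            heartbeat_interval_ms=100,   # matches the worker config in the log
        ))

    print("dispatcher listening at", dispatcher.target)  # e.g. grpc://localhost:<port>

With fault_tolerant_mode enabled, the dispatcher journals its state under work_dir, which is what makes the "Attempting to restore dispatcher state from journal" recovery sequences elsewhere in this log possible.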
I0000 00:00:1714026123.756111 3942519 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fa136dc40b8ad310_ldcg-aarch64-02-21abc122-2722052-616e5cbc95497.tfrecord*. 2024-04-25 06:22:04.485045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:05.485371: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:06.495040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026126.515293 3920840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__2fb8a4fe128a4213_ldcg-aarch64-02-dd740d35-2722052-616e5cab384b3.tfrecord*. 2024-04-25 06:22:07.505038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026127.635071 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 2024-04-25 06:22:08.508960: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026128.718143 3920840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__2fb8a4fe128a4213_ldcg-aarch64-02-dd740d35-2722052-616e5cab384b3.tfrecord*. 
2024-04-25 06:22:09.515066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026130.036267 3920839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__b2a6f6c60bbfc831_ldcg-aarch64-02-351b679a-2722052-616e5cab383e9.tfrecord*. 2024-04-25 06:22:10.515243: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026131.128044 3920839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__b2a6f6c60bbfc831_ldcg-aarch64-02-351b679a-2722052-616e5cab383e9.tfrecord*. 2024-04-25 06:22:11.542109: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:12.548986: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026133.072475 3942519 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fa136dc40b8ad310_ldcg-aarch64-02-21abc122-2722052-616e5cbc95497.tfrecord*. 
2024-04-25 06:22:13.555052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:14.575048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026134.675545 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 2024-04-25 06:22:15.585043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026135.815363 3942519 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fa136dc40b8ad310_ldcg-aarch64-02-21abc122-2722052-616e5cbc95497.tfrecord*. 2024-04-25 06:22:16.635054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026136.936928 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 
2024-04-25 06:22:17.645046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:18.655048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026139.515454 3987714 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7]: 0/1 streams completed; 3423/5000 splits assigned or completed. 2024-04-25 06:22:19.665071: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:20.675052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026141.084168 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 2024-04-25 06:22:21.725044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:22.735007: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026143.128212 3920840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__2fb8a4fe128a4213_ldcg-aarch64-02-dd740d35-2722052-616e5cab384b3.tfrecord*. 
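[Editor's note] The once-per-second AutoScaler warning repeated throughout this shard is expected noise here: as the message itself states, no iteration has reported processing times, and this test only drives a snapshot write without ever reading data through the service. A hedged sketch of the kind of consumer whose presence would give the AutoScaler something to measure (illustrative only; the test does not do this):

    import tensorflow as tf

    def consume_through_service(dispatcher_target: str):
        # Iterating a dataset distributed through the service is what reports the
        # processing/target processing times the AutoScaler needs; a snapshot-only
        # workload never creates such an iteration, hence the warnings above.
        ds = tf.data.Dataset.range(100)
        ds = ds.apply(
            tf.data.experimental.service.distribute(
                processing_mode="parallel_epochs",
                service=dispatcher_target,   # e.g. "grpc://localhost:45181"
            ))
        for _ in ds:
            pass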
2024-04-25 06:22:23.745059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:24.755048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:25.775042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026146.487942 3920840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__2fb8a4fe128a4213_ldcg-aarch64-02-dd740d35-2722052-616e5cab384b3.tfrecord*. 2024-04-25 06:22:26.785053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:27.795048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026148.607350 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 
2024-04-25 06:22:28.805037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:29.825050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:30.835047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026150.979906 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 2024-04-25 06:22:31.845066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:32.915051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026153.081142 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 2024-04-25 06:22:33.925050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026154.922632 3920840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__2fb8a4fe128a4213_ldcg-aarch64-02-dd740d35-2722052-616e5cab384b3.tfrecord*. 
2024-04-25 06:22:34.935042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:36.005046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026156.325909 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 2024-04-25 06:22:37.105901: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:38.111292: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026159.032469 3942518 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__dffefc9927ee546c_ldcg-aarch64-02-ce5a7c43-2722052-616e5cbc97ba7.tfrecord*. 2024-04-25 06:22:39.115041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026160.033120 3920840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__2fb8a4fe128a4213_ldcg-aarch64-02-dd740d35-2722052-616e5cab384b3.tfrecord*. 
2024-04-25 06:22:40.115974: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026161.039827 3920840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__2fb8a4fe128a4213_ldcg-aarch64-02-dd740d35-2722052-616e5cab384b3.tfrecord*. 2024-04-25 06:22:41.125058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026162.041716 3942519 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/8f41aabf6b170ccd830d67cf1bc9647f2cipb_ei/tmpgm1jqzm6/tmpj95b08tk/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__fa136dc40b8ad310_ldcg-aarch64-02-21abc122-2722052-616e5cbc95497.tfrecord*.
-- Test timed out at 2024-04-25 06:22:42 UTC --
Current thread 0x0000ffff9f177420 (most recent call first):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.py", line 92 in wait_for_snapshot
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.py", line 323 in testWorkersDontExceedMaxStreamAssignments
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343 in execute_test_method
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360 in decorated
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/parameterized.py", line 314 in bound_param_test
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 579 in _callTestMethod
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 623 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 678 in __call__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 122 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 84 in __call__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 122 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 84 in __call__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/runner.py", line 217 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/main.py", line 274 in runTests
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/main.py", line 102 in __init__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2537 in _run_and_get_tests_result
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2568 in run_tests
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2156 in _run_in_app
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2049 in main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 51 in g_main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/app.py", line 258 in _run_main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/app.py", line 312 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 60 in main_wrapper
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/benchmark.py", line 489 in benchmarks_main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 62 in main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/test.py", line 53 in main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.py", line 534 in <module>
================================================================================
==================== Test output for //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_ft_test (shard 1 of 17):
2024-04-25 06:07:22.089018: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Running tests under Python 3.11.6: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/python_aarch64-unknown-linux-gnu/bin/python3 [ RUN ] SnapshotFtTest.testDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_1_numworkers_1 [ SKIPPED ] SnapshotFtTest.testDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_1_numworkers_1 [ RUN ] SnapshotFtTest.testLargeMultiSourceSnapshotRecoversAndCompletes_test_mode_graph_tfapiversion_1_numsources_1_numworkers_3 [ SKIPPED ] SnapshotFtTest.testLargeMultiSourceSnapshotRecoversAndCompletes_test_mode_graph_tfapiversion_1_numsources_1_numworkers_3 [ RUN ] SnapshotFtTest.testNestedDataset_test_mode_eager_tfapiversion_2_numworkers_1 2024-04-25 06:07:25.518927: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp32jxogl_/tf_data_dispatcher_journal 2024-04-25 06:07:25.519004: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 2024-04-25 06:07:25.519935: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp32jxogl_" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:25.519975: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:37153 2024-04-25 06:07:25.519987: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:07:25.522271: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:37153. 
Worker config: protocol: "grpc" dispatcher_address: "localhost:37153" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:25.522478: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:38779 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR I0000 00:00:1714025246.047722 2608947 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot I0000 00:00:1714025246.457368 2608947 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot 2024-04-25 06:07:26.475271: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 38779 I0000 00:00:1714025246.565270 2608952 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot, created stream_0 and assigned to localhost:38779 2024-04-25 06:07:26.567027: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 37153 2024-04-25 06:07:26.567871: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp32jxogl_/tf_data_dispatcher_journal 2024-04-25 06:07:26.568143: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 63us. 
I0000 00:00:1714025246.617513 2617180 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot 2024-04-25 06:07:26.617792: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 37153 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp32jxogl_" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:26.617874: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:37153 2024-04-25 06:07:26.618335: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:07:26.626188: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:07:26.626713: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot, stream 0, chunk 0. I0000 00:00:1714025247.031175 2617672 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__70989c334e7b2bea_ldcg-aarch64-02-78b00a03-2575442-616e599f61440.tfrecord*. 2024-04-25 06:07:27.618654: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025248.031388 2617672 parallel_tfrecord_writer.cc:167] Writing TFRecord of 10B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__70989c334e7b2bea_ldcg-aarch64-02-78b00a03-2575442-616e599f61440.tfrecord*. 
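[Editor's note] The sequence just above (worker and dispatcher shut down, dispatcher restarted, "Restored from journal in 63us", snapshot "Resumed writing") is the fault-tolerance path this test exercises. A minimal sketch of that restart, assuming the public API and reusing the same port and work_dir so the journal is replayed; this is illustrative, not the test's own helper:

    import tensorflow as tf

    def restart_dispatcher(port: int, work_dir: str):
        # Sketch of the restart seen above: with fault_tolerant_mode=True, a new
        # DispatchServer pointed at the same work_dir and port replays the journal
        # and resumes any in-flight distributed snapshot. Assumes the previous
        # dispatcher on this port has already gone away.
        return tf.data.experimental.service.DispatchServer(
            tf.data.experimental.service.DispatcherConfig(
                port=port,                  # reuse the old port so workers reconnect
                work_dir=work_dir,          # same work_dir -> journal is replayed
                fault_tolerant_mode=True,
            ))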
2024-04-25 06:07:28.618826: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:07:28.939393: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 4950, chunk size: 48.3398KB. 2024-04-25 06:07:28.940168: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_4_4950. Checkpointing distributed tf.data snapshot writer took 716us 2024-04-25 06:07:29.026005: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot/streams/stream_0/checkpoints 2024-04-25 06:07:29.026400: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:07:29.273968: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:37153. 
Worker config: port: 38779 protocol: "grpc" dispatcher_address: "localhost:37153" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:29.274030: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:113] Distributed tf.data snapshot stream has already been completed for SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:07:29.274183: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:38779 I0000 00:00:1714025249.385701 2617363 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrayjt87/tmp76aak2iq/tf_data_snapshot 2024-04-25 06:07:29.477765: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 38779 2024-04-25 06:07:29.483714: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 37153 [ OK ] SnapshotFtTest.testNestedDataset_test_mode_eager_tfapiversion_2_numworkers_1 [ RUN ] SnapshotFtTest.testNonrepeatedDatasetDoesntProduceSecondRepetitionDir_test_mode_graph_tfapiversion_1_numsources_3_numworkers_5 [ SKIPPED ] SnapshotFtTest.testNonrepeatedDatasetDoesntProduceSecondRepetitionDir_test_mode_graph_tfapiversion_1_numsources_3_numworkers_5 [ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_2_numelements_1000_numrepetitions_10_numworkers_1 2024-04-25 06:07:29.495647: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpg6zenmsz/tf_data_dispatcher_journal 2024-04-25 06:07:29.495733: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 2024-04-25 06:07:29.496052: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpg6zenmsz" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:29.496086: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:36265 2024-04-25 06:07:29.496098: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:07:29.498162: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:36265. 
Worker config: protocol: "grpc" dispatcher_address: "localhost:36265" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:29.498340: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:42935 I0000 00:00:1714025249.509616 2633631 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot I0000 00:00:1714025249.547617 2633631 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot 2024-04-25 06:07:29.549229: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 42935 2024-04-25 06:07:29.576078: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 36265 2024-04-25 06:07:29.576822: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpg6zenmsz/tf_data_dispatcher_journal 2024-04-25 06:07:29.577020: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 70us. I0000 00:00:1714025249.596583 2633895 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot 2024-04-25 06:07:29.596839: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 36265 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpg6zenmsz" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:29.596935: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:36265 2024-04-25 06:07:29.596971: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025249.600385 2633983 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, created stream_0 and assigned to localhost:42935 2024-04-25 06:07:29.616978: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:36265. 
Worker config: port: 42935 protocol: "grpc" dispatcher_address: "localhost:36265" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:29.617201: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:42935 2024-04-25 06:07:29.635127: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:07:29.635713: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, stream 0, chunk 0. I0000 00:00:1714025249.675304 2634238 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a700cfdcad87b9ea_ldcg-aarch64-02-e5b351c4-2575442-616e59a23fe82.tfrecord*. 2024-04-25 06:07:30.601563: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025250.677448 2634237 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8ac209b6561c5043_ldcg-aarch64-02-6ac5eab-2575442-616e59a24713d.tfrecord*. I0000 00:00:1714025250.758799 2634241 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 1. 
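[Editor's note] The "Reset tf.data snapshot split provider ... repetition N" entries here, paired with the "Starting repetition_N" messages that follow, come from snapshotting a repeated dataset: testRepeatedDatasetRecoversAndCompletes runs 1,000 elements repeated 10 times, and each pass over the source is tracked as its own repetition. A sketch of the assumed input pipeline (construction only; the save itself goes through TensorFlow-internal helpers):

    import tensorflow as tf

    # Assumed shape of the dataset behind the repeated-dataset test: 1,000 elements
    # repeated 10 times; each pass over the source appears in the log as one
    # repetition_N, which is why the split provider is reset between repetitions.
    dataset = tf.data.Dataset.range(1000).repeat(10)
    assert dataset.cardinality().numpy() == 10_000  # 10 repetitions x 1,000 elements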
I0000 00:00:1714025250.762581 2641107 snapshot_manager.cc:775] Starting repetition_1 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, source 0 2024-04-25 06:07:31.605188: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025251.685041 2634238 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a700cfdcad87b9ea_ldcg-aarch64-02-e5b351c4-2575442-616e59a23fe82.tfrecord*. I0000 00:00:1714025251.926288 2634241 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 2. I0000 00:00:1714025251.937873 2651168 snapshot_manager.cc:775] Starting repetition_2 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, source 0 2024-04-25 06:07:32.607449: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025252.707131 2634238 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2f525cbffca0deed_ldcg-aarch64-02-e5b351c4-2575442-616e59a51680f.tfrecord*. I0000 00:00:1714025253.604638 2634241 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 3. 
2024-04-25 06:07:33.607606: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025253.608497 2656211 snapshot_manager.cc:775] Starting repetition_3 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, source 0 I0000 00:00:1714025253.707604 2634238 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2f525cbffca0deed_ldcg-aarch64-02-e5b351c4-2575442-616e59a51680f.tfrecord*. 2024-04-25 06:07:34.607774: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025254.776171 2634238 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2f525cbffca0deed_ldcg-aarch64-02-e5b351c4-2575442-616e59a51680f.tfrecord*. I0000 00:00:1714025255.206834 2634241 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 4. I0000 00:00:1714025255.210756 2661501 snapshot_manager.cc:775] Starting repetition_4 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, source 0 2024-04-25 06:07:35.607960: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025255.779024 2634237 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d0f83cb1708399fc_ldcg-aarch64-02-6ac5eab-2575442-616e59a513859.tfrecord*. 
I0000 00:00:1714025256.255351 2634241 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 5. I0000 00:00:1714025256.267061 2678110 snapshot_manager.cc:775] Starting repetition_5 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, source 0 2024-04-25 06:07:36.625045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025256.779672 2634238 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__445bce97a4501a10_ldcg-aarch64-02-e5b351c4-2575442-616e59a856b88.tfrecord*. I0000 00:00:1714025256.958327 2634241 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 6. I0000 00:00:1714025256.977193 2678110 snapshot_manager.cc:775] Starting repetition_6 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, source 0 I0000 00:00:1714025257.468930 2634241 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 7. 
I0000 00:00:1714025257.471994 2686518 snapshot_manager.cc:775] Starting repetition_7 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, source 0 2024-04-25 06:07:37.635043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025257.810864 2634238 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3711a27a3ff65e3e_ldcg-aarch64-02-e5b351c4-2575442-616e59a9d3dad.tfrecord*. I0000 00:00:1714025257.951341 2634241 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 8. I0000 00:00:1714025257.954721 2688139 snapshot_manager.cc:775] Starting repetition_8 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, source 0 I0000 00:00:1714025258.466173 2634241 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 9. I0000 00:00:1714025258.469343 2693739 snapshot_manager.cc:775] Starting repetition_9 for snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, source 0 2024-04-25 06:07:38.645052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025258.811184 2634238 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3711a27a3ff65e3e_ldcg-aarch64-02-e5b351c4-2575442-616e59a9d3dad.tfrecord*. 
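Each "Reset tf.data snapshot split provider ... repetition N" / "Starting repetition_N" pair above corresponds to one pass over the source dataset: the test name (numelements_1000, numrepetitions_10) says the snapshotted dataset is a 1000-element range repeated 10 times, so the split provider is rewound once per repetition until all ten passes have been handed out as splits. A rough sketch of the dataset and the snapshot call follows; tf.data.experimental.distributed_save as the entry point (and its positional arguments) is an assumption about the API under test, not something shown in this log.

    import tensorflow as tf

    # Assumed shape of the dataset under test: 1000 elements, repeated 10 times
    # (10000 elements total, matching the checkpoint below).
    dataset = tf.data.Dataset.range(1000).repeat(10)

    # Assumed entry point: ask the tf.data service cluster to write the snapshot; the
    # address stands in for the dispatcher at localhost:36265 from the log above.
    tf.data.experimental.distributed_save(
        dataset, "/tmp/tf_data_snapshot", "grpc://localhost:36265")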
I0000 00:00:1714025259.168768 2634241 snapshot_split_provider.cc:222] Reset tf.data snapshot split provider for snapshot base_path: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot" num_sources: 1 metadata { element_spec: "\212\002\004\022\000\030\t" compression: "SNAPPY" }, repetition 10. 2024-04-25 06:07:39.169565: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 10000, chunk size: 136.719KB. 2024-04-25 06:07:39.170110: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_10_10000. Checkpointing distributed tf.data snapshot writer took 492us 2024-04-25 06:07:39.171230: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot/streams/stream_0/checkpoints 2024-04-25 06:07:39.171584: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot, stream: 0, compression: SNAPPY } I0000 00:00:1714025259.285993 2702617 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_bj6qb99/tmpury62b21/tf_data_snapshot 2024-04-25 06:07:39.645229: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:07:40.655036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:07:41.655189: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:07:42.655360: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of 
workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:07:43.664515: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:07:44.665079: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:07:44.907010: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence 2024-04-25 06:07:44.907959: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence 2024-04-25 06:07:45.111297: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 42935 2024-04-25 06:07:45.126110: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 36265 [ OK ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_eager_tfapiversion_2_numelements_1000_numrepetitions_10_numworkers_1 [ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numelements_1_numrepetitions_10_numworkers_3 [ SKIPPED ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numelements_1_numrepetitions_10_numworkers_3 [ RUN ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_2_numelements_2_numrepetitions_1_numworkers_1 2024-04-25 06:07:46.123675: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_0vodm6c/tf_data_dispatcher_journal 2024-04-25 06:07:46.123761: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 2024-04-25 06:07:46.124070: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_0vodm6c" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:46.124108: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:36669 2024-04-25 06:07:46.124126: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:07:46.126460: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:36669. 
Worker config: protocol: "grpc" dispatcher_address: "localhost:36669" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:46.126657: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:41953 WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/contextlib.py:105: TensorFlowTestCase.test_session (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `self.session()` or `self.cached_session()` instead. W0425 06:07:46.137159 281473048015904 deprecation.py:50] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/contextlib.py:105: TensorFlowTestCase.test_session (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `self.session()` or `self.cached_session()` instead. 2024-04-25 06:07:46.143166: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled I0000 00:00:1714025266.157855 2744744 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot I0000 00:00:1714025266.188019 2744744 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot 2024-04-25 06:07:46.193425: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 41953 2024-04-25 06:07:46.194904: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 36669 2024-04-25 06:07:46.195655: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_0vodm6c/tf_data_dispatcher_journal 2024-04-25 06:07:46.195865: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 68us. 
I0000 00:00:1714025266.210852 2745273 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot 2024-04-25 06:07:46.211097: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 36669 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp_0vodm6c" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:46.211192: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:36669 2024-04-25 06:07:46.211211: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025266.214279 2745494 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot, created stream_0 and assigned to localhost:41953 2024-04-25 06:07:46.231625: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:36669. Worker config: port: 41953 protocol: "grpc" dispatcher_address: "localhost:36669" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:46.231694: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:07:46.231849: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:41953 2024-04-25 06:07:46.232303: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot, stream 0, chunk 0. I0000 00:00:1714025266.246202 2745682 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ea8ec768f8802818_ldcg-aarch64-02-35fe0e74-2575442-616e59b213ccd.tfrecord*. 
2024-04-25 06:07:46.247587: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2, chunk size: 28B. 2024-04-25 06:07:46.248150: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot/streams/stream_0/checkpoints/checkpoint_2_2. Checkpointing distributed tf.data snapshot writer took 511us 2024-04-25 06:07:46.248564: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot/streams/stream_0/checkpoints 2024-04-25 06:07:46.248869: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot, stream: 0, compression: SNAPPY } I0000 00:00:1714025266.332759 2745259 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpvrkvgqbm/tmpwagwr0yc/tf_data_snapshot 2024-04-25 06:07:46.509208: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence [[{{node IteratorGetNext}}]] 2024-04-25 06:07:46.939102: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 41953 2024-04-25 06:07:46.961031: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 36669 [ OK ] SnapshotFtTest.testRepeatedDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_2_numelements_2_numrepetitions_1_numworkers_1 [ RUN ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSourceName_test_mode_graph_tfapiversion_2_badsourcedirname_sourcex 2024-04-25 06:07:46.992258: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpeudr5gga/tf_data_dispatcher_journal 2024-04-25 06:07:46.992348: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 
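The chunk sizes above are consistent with the 14B records being written: 2 elements x 14B = 28B for this snapshot's single chunk, and 10000 x 14B = 140000 bytes, i.e. the 136.719KB reported for the earlier repeated-dataset run. The paths in the checkpoint and cleanup messages also spell out the per-stream on-disk layout the writer uses; reconstructed here from those paths only:

    import os

    base = "/tmp/tf_data_snapshot"   # stands in for the long .../tf_data_snapshot paths above
    stream = os.path.join(base, "streams", "stream_0")

    # Chunks are first written as shard files under uncommitted_chunks (the
    # chunk_0_CHUNK_SHARDS___shard__*.tfrecord* names above) and presumably moved
    # elsewhere once committed.
    uncommitted = os.path.join(stream, "uncommitted_chunks")

    # The writer periodically records its progress here (e.g. checkpoint_2_2 above) and
    # deletes the whole directory once the stream finishes ("Deleting tf.data snapshot
    # checkpoints directory").
    checkpoints = os.path.join(stream, "checkpoints")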
2024-04-25 06:07:46.992655: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpeudr5gga" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:46.992679: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:45609 I0000 00:00:1714025267.047424 2751795 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpx4vxxt05/tmpzdv2fpkk/tf_data_snapshot I0000 00:00:1714025267.129032 2751795 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpx4vxxt05/tmpzdv2fpkk/tf_data_snapshot 2024-04-25 06:07:47.181909: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45609 2024-04-25 06:07:47.182676: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpeudr5gga/tf_data_dispatcher_journal 2024-04-25 06:07:47.182857: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 39us. 2024-04-25 06:07:47.335080: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45609 [ OK ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSourceName_test_mode_graph_tfapiversion_2_badsourcedirname_sourcex [ RUN ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSplitNames_test_mode_graph_tfapiversion_2_badsplitfilename_split01 2024-04-25 06:07:47.340944: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpmb9hndya/tf_data_dispatcher_journal 2024-04-25 06:07:47.341016: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 
2024-04-25 06:07:47.375191: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpmb9hndya" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:47.375257: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:45597 2024-04-25 06:07:47.375276: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive I0000 00:00:1714025267.399978 2753378 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp96j0bi_a/tmp6v219yql/tf_data_snapshot I0000 00:00:1714025267.771425 2753378 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmp96j0bi_a/tmp6v219yql/tf_data_snapshot 2024-04-25 06:07:48.015346: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45597 2024-04-25 06:07:48.016136: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpmb9hndya/tf_data_dispatcher_journal 2024-04-25 06:07:48.016327: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 49us. 2024-04-25 06:07:48.234497: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:07:48.235108: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 45597 [ OK ] SnapshotFtTest.testSnapshotRecoveryFailsWithBadSplitNames_test_mode_graph_tfapiversion_2_badsplitfilename_split01 [ RUN ] SnapshotFtTest.testSnapshotRecoveryFailsWithDuplicateGlobalIndexInSplitName_test_mode_eager_tfapiversion_2 2024-04-25 06:07:48.300700: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpkbf72pjc/tf_data_dispatcher_journal 2024-04-25 06:07:48.300775: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 
2024-04-25 06:07:48.301089: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpkbf72pjc" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:07:48.301115: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:46479 2024-04-25 06:07:48.303267: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive I0000 00:00:1714025268.307399 2759826 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpedblpb4p/tmpvqjgtu7k/tf_data_snapshot I0000 00:00:1714025268.327069 2759826 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpedblpb4p/tmpvqjgtu7k/tf_data_snapshot 2024-04-25 06:07:48.566407: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 46479 2024-04-25 06:07:48.567222: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpkbf72pjc/tf_data_dispatcher_journal 2024-04-25 06:07:48.567383: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 38us. 2024-04-25 06:07:48.751273: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:07:48.751702: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 46479 [ OK ] SnapshotFtTest.testSnapshotRecoveryFailsWithDuplicateGlobalIndexInSplitName_test_mode_eager_tfapiversion_2 [ RUN ] SnapshotFtTest.testSnapshotRecoveryFailsWithOutOfOrderSplitName_test_mode_graph_tfapiversion_1 [ SKIPPED ] SnapshotFtTest.testSnapshotRecoveryFailsWithOutOfOrderSplitName_test_mode_graph_tfapiversion_1 [ RUN ] SnapshotFtTest.testWorkersDontExceedMaxStreamAssignments_test_mode_graph_tfapiversion_2_workermaxconcurrentsnapshots_2 2024-04-25 06:07:48.776007: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpfwxeua3v/tf_data_dispatcher_journal 2024-04-25 06:07:48.776088: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state. 
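The short cases above (testSnapshotRecoveryFailsWithBadSourceName, ...BadSplitNames, ...DuplicateGlobalIndexInSplitName, plus the skipped ...OutOfOrderSplitName) corrupt the names the dispatcher expects to find on disk when it recovers a snapshot and assert that recovery fails rather than silently continuing. From the test parameters (badsourcedirname_sourcex, badsplitfilename_split01) the expected shapes are source directories like source_<i> and split files carrying both a per-source and a global index; that naming is inferred here, not printed in this log. A toy validator along those lines:

    import re

    SPLIT_RE = re.compile(r"^split_(\d+)_(\d+)$")  # assumed shape: split_<local>_<global>

    def check_split_names(filenames):
        # Sketch of the checks these recovery tests expect to fail: names must parse,
        # and global indices must be unique and in order.
        global_indices = []
        for name in filenames:
            match = SPLIT_RE.match(name)
            if match is None:                      # e.g. the deliberately bad "split01"
                raise ValueError("unrecognized split file name: " + name)
            global_indices.append(int(match.group(2)))
        if len(set(global_indices)) != len(global_indices):
            raise ValueError("duplicate global split index")
        if global_indices != sorted(global_indices):
            raise ValueError("out-of-order global split index")

    check_split_names(["split_0_0", "split_1_1"])   # passes
    # check_split_names(["split01"])                # raises, like badsplitfilename_split01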
2024-04-25 06:07:48.776372: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpfwxeua3v" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 2 2024-04-25 06:07:48.776411: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:38477 2024-04-25 06:07:48.776438: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:07:48.778586: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38477. Worker config: protocol: "grpc" dispatcher_address: "localhost:38477" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:48.778760: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:34683 2024-04-25 06:07:48.780534: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38477. Worker config: protocol: "grpc" dispatcher_address: "localhost:38477" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:07:48.780718: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:40545 I0000 00:00:1714025268.817110 2761256 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0 I0000 00:00:1714025268.841394 2761256 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0 I0000 00:00:1714025268.865606 2761256 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1 I0000 00:00:1714025268.927878 2761256 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1 I0000 00:00:1714025268.929863 2761259 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1, created stream_0 and assigned to localhost:40545 I0000 00:00:1714025268.952662 2761945 snapshot_manager.cc:687] For snapshot at 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0, created stream_0 and assigned to localhost:34683 I0000 00:00:1714025268.959750 2761979 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2 2024-04-25 06:07:48.971704: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1, stream: 0, compression: SNAPPY } 2024-04-25 06:07:48.972257: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1, stream 0, chunk 0. 2024-04-25 06:07:48.976157: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0, stream: 0, compression: SNAPPY } 2024-04-25 06:07:48.976679: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0, stream 0, chunk 0. I0000 00:00:1714025269.009238 2761979 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2 I0000 00:00:1714025269.016005 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b6e474b4cb9adb6b_ldcg-aarch64-02-1dd91bf4-2575442-616e59b4b0bc2.tfrecord*. 
I0000 00:00:1714025269.084462 2763327 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3 I0000 00:00:1714025269.084798 2763326 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2, created stream_0 and assigned to localhost:40545 2024-04-25 06:07:49.165438: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2, stream: 0, compression: SNAPPY } 2024-04-25 06:07:49.166100: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2, stream 0, chunk 0. I0000 00:00:1714025269.170689 2763327 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3 I0000 00:00:1714025269.171880 2763335 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3, created stream_0 and assigned to localhost:34683 I0000 00:00:1714025269.221141 2763326 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_4 2024-04-25 06:07:49.230082: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3, stream: 0, compression: SNAPPY } 2024-04-25 06:07:49.230677: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3, stream 0, chunk 0. 
I0000 00:00:1714025269.297216 2763326 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_4 I0000 00:00:1714025269.336909 2764498 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5 I0000 00:00:1714025269.387043 2764498 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5 I0000 00:00:1714025269.407430 2764708 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_6 I0000 00:00:1714025269.507193 2764708 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_6 I0000 00:00:1714025269.545911 2765774 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7 I0000 00:00:1714025269.907458 2765774 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7 2024-04-25 06:07:49.909420: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:07:49.909460: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:34683 due to timeout I0000 00:00:1714025270.086234 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__137aa77a6a67db92_ldcg-aarch64-02-1253d3d3-2575442-616e59b520d77.tfrecord*. 
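This test (testWorkersDontExceedMaxStreamAssignments, worker_max_concurrent_snapshots: 2) registers ten snapshots against two workers: the log shows streams for tf_data_snapshot_0 through _3 being created and assigned, while the remaining snapshots only report "Started writing" and wait, i.e. each worker holds at most two active streams. A schematic of that cap, purely as an illustration of the behavior being asserted and not the dispatcher's implementation:

    def assign_streams(pending_snapshots, workers, max_concurrent=2):
        # Give each snapshot a stream on the least-loaded worker, but never let a worker
        # exceed max_concurrent active streams; the rest stay pending.
        assignments = {worker: [] for worker in workers}
        for snapshot in pending_snapshots:
            worker = min(workers, key=lambda w: len(assignments[w]))
            if len(assignments[worker]) < max_concurrent:
                assignments[worker].append(snapshot)
        return assignments

    print(assign_streams(["tf_data_snapshot_%d" % i for i in range(10)],
                         ["localhost:34683", "localhost:40545"]))
    # -> two snapshots per worker; the other six wait until a stream finishes.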
I0000 00:00:1714025270.234843 2765774 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8 I0000 00:00:1714025270.277568 2765774 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8 I0000 00:00:1714025270.380482 2768653 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9 I0000 00:00:1714025270.397397 2768653 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9 2024-04-25 06:07:50.575181: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 38477 2024-04-25 06:07:50.575921: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpfwxeua3v/tf_data_dispatcher_journal 2024-04-25 06:07:50.576192: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 98us. 2024-04-25 06:07:50.625881: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:07:50.625928: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714025270.744461 2769222 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9 I0000 00:00:1714025270.756513 2769216 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3 I0000 00:00:1714025270.762042 2769217 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5 I0000 00:00:1714025270.780002 2769221 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7 I0000 00:00:1714025270.780504 2769214 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0 I0000 00:00:1714025270.799445 2769219 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8 I0000 00:00:1714025270.813228 2769213 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1 I0000 00:00:1714025270.818239 2769215 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2 I0000 00:00:1714025270.824616 2769220 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_4 I0000 00:00:1714025270.939562 2769218 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_6 2024-04-25 06:07:50.955341: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 38477 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpfwxeua3v" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 2 2024-04-25 06:07:50.955450: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:38477 2024-04-25 06:07:50.975048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025271.480669 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1f874777dd19c8af_ldcg-aarch64-02-a6d49c0e-2575442-616e59b4b1cf1.tfrecord*. 
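While the dispatcher is restarted here, both workers log "Failed to send heartbeat to dispatcher: UNAVAILABLE: ... Socket closed"; once the new dispatcher (same port 38477, same work_dir) has replayed its journal, all ten snapshot managers report "Resumed writing" and the chunk writers above carry on. The worker side of this is a periodic heartbeat that tolerates failures; the retry-until-reachable loop below is a schematic inferred from the log and the heartbeat_interval_ms: 100 worker config, not code from the worker implementation.

    import time

    def heartbeat_loop(send_heartbeat, heartbeat_interval_ms=100, max_beats=None):
        # Send a heartbeat every interval; log and keep going if the dispatcher is down.
        beats = 0
        while max_beats is None or beats < max_beats:
            try:
                send_heartbeat()
            except ConnectionError as error:   # stand-in for the gRPC UNAVAILABLE status
                print("Failed to send heartbeat to dispatcher:", error)
            time.sleep(heartbeat_interval_ms / 1000.0)
            beats += 1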
2024-04-25 06:07:51.975928: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025272.480782 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__be9c479f494ea787_ldcg-aarch64-02-4dc271a-2575442-616e59b4b0c61.tfrecord*. 2024-04-25 06:07:52.985085: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025273.481364 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1f874777dd19c8af_ldcg-aarch64-02-a6d49c0e-2575442-616e59b4b1cf1.tfrecord*. 2024-04-25 06:07:53.986191: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025274.485509 2764435 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__9a1d5152e165b289_ldcg-aarch64-02-13398469-2575442-616e59b4efdf9.tfrecord*. 2024-04-25 06:07:54.986363: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025275.499064 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b6e474b4cb9adb6b_ldcg-aarch64-02-1dd91bf4-2575442-616e59b4b0bc2.tfrecord*. 
2024-04-25 06:07:55.995044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025276.499838 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1d9a2d3848b6bb63_ldcg-aarch64-02-24da0549-2575442-616e59b4efdd3.tfrecord*. 2024-04-25 06:07:56.995225: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025277.500239 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__e66035e295b92dab_ldcg-aarch64-02-8fd05898-2575442-616e59b4efd1e.tfrecord*. 2024-04-25 06:07:57.995880: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025278.501004 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__e66035e295b92dab_ldcg-aarch64-02-8fd05898-2575442-616e59b4efd1e.tfrecord*. 2024-04-25 06:07:59.005278: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:00.015059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:00.015134: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:34683 due to timeout I0000 00:00:1714025280.057019 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1d9a2d3848b6bb63_ldcg-aarch64-02-24da0549-2575442-616e59b4efdd3.tfrecord*. 
2024-04-25 06:08:01.015261: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025281.058363 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7402bcf57ab6de_ldcg-aarch64-02-9c85aac4-2575442-616e59b4b1cee.tfrecord*. 2024-04-25 06:08:02.043805: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025282.058691 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:08:03.043979: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025283.058844 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 2024-04-25 06:08:04.045070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025284.065040 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 
2024-04-25 06:08:05.045236: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025285.082269 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:08:06.045388: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025286.082717 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 2024-04-25 06:08:07.065060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025287.085600 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 2024-04-25 06:08:08.075040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025288.096823 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c314bc228fe96ff7_ldcg-aarch64-02-8fd05898-2575442-616e59c11669f.tfrecord*. 
2024-04-25 06:08:09.075217: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025289.105728 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:08:10.095039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025290.598206 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:08:11.105059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025291.991533 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__805ea45e4335153_ldcg-aarch64-02-1253d3d3-2575442-616e59c01c601.tfrecord*. 2024-04-25 06:08:12.105347: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:13.105526: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025293.546606 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 
2024-04-25 06:08:14.105838: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:15.106018: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025295.582940 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 2024-04-25 06:08:16.106212: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025296.635835 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 2024-04-25 06:08:17.111758: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025297.922007 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 2024-04-25 06:08:18.128896: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025298.975948 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 
2024-04-25 06:08:19.129079: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025299.980506 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__805ea45e4335153_ldcg-aarch64-02-1253d3d3-2575442-616e59c01c601.tfrecord*. 2024-04-25 06:08:20.135050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:21.135317: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025301.876347 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:08:22.145051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025303.025456 2764435 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__12d56cfa11bc694d_ldcg-aarch64-02-13398469-2575442-616e59bea3fe0.tfrecord*. 
2024-04-25 06:08:23.146429: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:24.165050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:25.195191: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:26.195368: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025306.285627 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:08:27.196252: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025307.846447 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:08:28.205041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025308.857126 2764435 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__12d56cfa11bc694d_ldcg-aarch64-02-13398469-2575442-616e59bea3fe0.tfrecord*. 
2024-04-25 06:08:29.208311: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025310.166385 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:08:30.208512: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025311.201322 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:08:31.215065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:32.235073: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025312.658999 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 2024-04-25 06:08:33.235795: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025314.016787 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 
2024-04-25 06:08:34.245059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025315.056114 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 2024-04-25 06:08:35.249150: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:36.267175: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025316.706177 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:08:37.303489: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:38.315112: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:39.325084: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025319.677844 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 
2024-04-25 06:08:40.335062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:41.355185: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025321.576714 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:08:42.355701: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025322.579307 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 
2024-04-25 06:08:43.355894: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:44.356703: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:45.365049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:46.385058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025326.628047 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:08:47.395059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025327.855028 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:08:48.400711: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025329.152389 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 
2024-04-25 06:08:49.400906: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:50.475065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025330.647218 2994623 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1]: 0/1 streams completed; 4049/5000 splits assigned or completed. I0000 00:00:1714025330.647332 2994623 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2]: 0/1 streams completed; 4321/5000 splits assigned or completed. I0000 00:00:1714025330.665710 2994478 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0]: 0/1 streams completed; 4130/5000 splits assigned or completed. I0000 00:00:1714025330.665819 2994478 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3]: 0/1 streams completed; 4191/5000 splits assigned or completed. I0000 00:00:1714025330.940649 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:08:51.495058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025332.056479 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 
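The snapshot progress lines above report several concurrently written distributed snapshots (tf_data_snapshot_0 through tf_data_snapshot_3, each with 5000 splits). A minimal sketch of how such a snapshot is requested and later read back, assuming the experimental tf.data.experimental.service.distributed_save entry point (exact argument names can differ between TF versions) and reusing the hypothetical `dispatcher` from the earlier sketch; the dataset and path here are illustrative:

    import tensorflow as tf

    # Ask the tf.data service to write the dataset as a distributed snapshot.
    # dispatcher.target is the "grpc://host:port" address of a running dispatcher.
    dataset = tf.data.Dataset.range(5000)
    snapshot_path = "/tmp/tf_data_snapshot_0"  # illustrative path
    tf.data.experimental.service.distributed_save(
        dataset, snapshot_path, dispatcher.target)

    # Once the snapshot reaches the DONE state, recent TF versions can read the
    # committed chunks back directly.
    restored = tf.data.Dataset.load(snapshot_path)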
2024-04-25 06:08:52.495324: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:53.495531: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025333.636420 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c314bc228fe96ff7_ldcg-aarch64-02-8fd05898-2575442-616e59c11669f.tfrecord*. 2024-04-25 06:08:54.505330: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025335.272060 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 
2024-04-25 06:08:55.515057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:56.525056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:57.525242: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:08:58.535055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025338.617663 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 2024-04-25 06:08:59.545053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:00.585057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:01.595074: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025342.255603 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 
2024-04-25 06:09:02.605059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:03.615989: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025344.197572 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:09:04.616678: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:05.635087: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025345.666040 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:09:06.645065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025346.974935 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 
2024-04-25 06:09:07.655058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025348.335396 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:09:08.665168: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:09.675056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025349.778885 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:09:10.685053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:11.688671: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025352.208137 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 
2024-04-25 06:09:12.688840: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025353.539132 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:09:13.695047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:14.695653: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025354.945243 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:09:15.695858: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025355.965791 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:09:16.700853: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025357.249018 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 
2024-04-25 06:09:17.715089: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025358.310367 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:09:18.715344: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025359.413766 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 2024-04-25 06:09:19.725061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:20.735049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025361.494293 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__805ea45e4335153_ldcg-aarch64-02-1253d3d3-2575442-616e59c01c601.tfrecord*. 
2024-04-25 06:09:21.735246: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:22.745059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025363.035464 2764435 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__12d56cfa11bc694d_ldcg-aarch64-02-13398469-2575442-616e59bea3fe0.tfrecord*. 2024-04-25 06:09:23.745458: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025364.085024 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 2024-04-25 06:09:24.745632: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025365.738157 2764435 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__12d56cfa11bc694d_ldcg-aarch64-02-13398469-2575442-616e59bea3fe0.tfrecord*. 
2024-04-25 06:09:25.747159: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:26.747386: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025366.861579 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c314bc228fe96ff7_ldcg-aarch64-02-8fd05898-2575442-616e59c11669f.tfrecord*. 2024-04-25 06:09:27.747573: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025368.035136 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:09:28.751732: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025369.165249 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:09:29.751899: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025370.252065 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c314bc228fe96ff7_ldcg-aarch64-02-8fd05898-2575442-616e59c11669f.tfrecord*. 
2024-04-25 06:09:30.752085: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:31.753655: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025372.222432 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 2024-04-25 06:09:32.753838: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025373.285081 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c314bc228fe96ff7_ldcg-aarch64-02-8fd05898-2575442-616e59c11669f.tfrecord*. 2024-04-25 06:09:33.754019: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:34.755055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025374.995690 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 
2024-04-25 06:09:35.759759: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025376.151083 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__805ea45e4335153_ldcg-aarch64-02-1253d3d3-2575442-616e59c01c601.tfrecord*. 2024-04-25 06:09:36.759933: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025377.258869 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:09:37.760219: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025378.296073 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:09:38.795051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025379.466895 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__805ea45e4335153_ldcg-aarch64-02-1253d3d3-2575442-616e59c01c601.tfrecord*. 
2024-04-25 06:09:39.825051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025380.708553 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:09:40.835050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:41.835231: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025381.924487 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 2024-04-25 06:09:42.845058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025383.342565 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 
2024-04-25 06:09:43.847274: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:44.855048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025385.027423 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 2024-04-25 06:09:45.865054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025386.116380 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:09:46.875128: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025387.245528 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 2024-04-25 06:09:47.875294: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025388.363805 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 
2024-04-25 06:09:48.875484: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025389.526118 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:09:49.903697: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025390.735670 3129132 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0]: 0/1 streams completed; 4320/5000 splits assigned or completed. I0000 00:00:1714025390.735764 3129132 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3]: 0/1 streams completed; 4593/5000 splits assigned or completed. I0000 00:00:1714025390.736126 3127617 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1]: 0/1 streams completed; 4286/5000 splits assigned or completed. I0000 00:00:1714025390.736179 3127617 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2]: 0/1 streams completed; 4617/5000 splits assigned or completed. 2024-04-25 06:09:50.905045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025391.155062 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c314bc228fe96ff7_ldcg-aarch64-02-8fd05898-2575442-616e59c11669f.tfrecord*. 
2024-04-25 06:09:51.925046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025392.393898 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c314bc228fe96ff7_ldcg-aarch64-02-8fd05898-2575442-616e59c11669f.tfrecord*. 2024-04-25 06:09:52.926492: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025393.415203 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:09:53.935064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025394.793768 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 2024-04-25 06:09:54.935260: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:55.935450: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025396.035078 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 
2024-04-25 06:09:56.935607: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025397.922758 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__805ea45e4335153_ldcg-aarch64-02-1253d3d3-2575442-616e59c01c601.tfrecord*. 2024-04-25 06:09:57.936822: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025398.925075 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:09:58.955046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:09:59.965045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025400.317145 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 2024-04-25 06:10:00.965560: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025401.331560 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 
2024-04-25 06:10:01.965754: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025402.338531 2764435 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dd48aa3076581a7e_ldcg-aarch64-02-13398469-2575442-616e5a2086afd.tfrecord*. 2024-04-25 06:10:02.975036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025403.716151 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:10:03.975199: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025404.845915 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c314bc228fe96ff7_ldcg-aarch64-02-8fd05898-2575442-616e59c11669f.tfrecord*. 2024-04-25 06:10:04.976494: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:05.985043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025406.013849 2764435 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dd48aa3076581a7e_ldcg-aarch64-02-13398469-2575442-616e5a2086afd.tfrecord*. 
2024-04-25 06:10:06.986604: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:07.986769: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025408.005569 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:10:08.986949: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025409.087858 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 2024-04-25 06:10:09.995067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025410.775732 2764435 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dd48aa3076581a7e_ldcg-aarch64-02-13398469-2575442-616e5a2086afd.tfrecord*. 2024-04-25 06:10:11.005063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025411.865475 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__975a605d6e60b569_ldcg-aarch64-02-24da0549-2575442-616e59c10d03b.tfrecord*. 
2024-04-25 06:10:12.005523: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025412.887558 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 2024-04-25 06:10:13.005739: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025413.906276 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:10:14.006846: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:15.007018: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:16.007182: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025416.685032 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 
2024-04-25 06:10:17.025057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:18.035083: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:19.045044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025420.038672 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c314bc228fe96ff7_ldcg-aarch64-02-8fd05898-2575442-616e59c11669f.tfrecord*. 2024-04-25 06:10:20.045498: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:21.077952: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:22.095048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:23.105051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025423.285455 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9467e7446faa8ee_ldcg-aarch64-02-1253d3d3-2575442-616e5a2fab4dd.tfrecord*. 
2024-04-25 06:10:24.115049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:25.125051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025425.395135 2764435 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dd48aa3076581a7e_ldcg-aarch64-02-13398469-2575442-616e5a2086afd.tfrecord*. 2024-04-25 06:10:26.127180: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:27.145035: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025428.065698 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c314bc228fe96ff7_ldcg-aarch64-02-8fd05898-2575442-616e59c11669f.tfrecord*. 
2024-04-25 06:10:28.150417: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:29.165066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:30.165363: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:31.175144: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025431.198813 2764435 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dd48aa3076581a7e_ldcg-aarch64-02-13398469-2575442-616e5a2086afd.tfrecord*. 2024-04-25 06:10:32.185027: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025432.225024 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:10:33.185498: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025434.086895 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 
2024-04-25 06:10:34.185682: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025435.168449 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:10:35.185878: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:36.205076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025436.766827 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:10:37.215049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:38.225271: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:39.235037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025439.395030 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 
2024-04-25 06:10:40.245057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:41.245230: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:42.265043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025442.335350 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:10:43.265205: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025444.025540 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 2024-04-25 06:10:44.285053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:45.285216: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025445.425061 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 
2024-04-25 06:10:46.295074: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025446.494542 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 2024-04-25 06:10:47.295233: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025448.090518 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 2024-04-25 06:10:48.295674: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025449.136908 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:10:49.296495: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:50.315062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025450.745613 3234967 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1]: 0/1 streams completed; 4520/5000 splits assigned or completed. 
I0000 00:00:1714025450.745701 3234967 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2]: 0/1 streams completed; 4780/5000 splits assigned or completed. I0000 00:00:1714025450.814350 3232766 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0]: 0/1 streams completed; 4547/5000 splits assigned or completed. I0000 00:00:1714025450.814465 3232766 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3]: 0/1 streams completed; 4881/5000 splits assigned or completed. 2024-04-25 06:10:51.325045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025451.765041 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 2024-04-25 06:10:52.326663: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025453.265734 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__465373b204515895_ldcg-aarch64-02-24da0549-2575442-616e5a5520c24.tfrecord*. 
2024-04-25 06:10:53.335044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:54.345058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025454.600513 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:10:55.365202: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025455.781890 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ed6e02b542a677da_ldcg-aarch64-02-8fd05898-2575442-616e5a4d05a0a.tfrecord*. 2024-04-25 06:10:56.375041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:10:57.415185: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025457.585037 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 
2024-04-25 06:10:58.425093: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025458.747289 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ed6e02b542a677da_ldcg-aarch64-02-8fd05898-2575442-616e5a4d05a0a.tfrecord*. 2024-04-25 06:10:59.445045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025459.857942 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__465373b204515895_ldcg-aarch64-02-24da0549-2575442-616e5a5520c24.tfrecord*. 2024-04-25 06:11:00.455314: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:01.465048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025462.228589 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 
2024-04-25 06:11:02.465206: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:03.475076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025464.046717 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ed6e02b542a677da_ldcg-aarch64-02-8fd05898-2575442-616e5a4d05a0a.tfrecord*. 2024-04-25 06:11:04.485046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:05.485208: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025465.776806 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 2024-04-25 06:11:06.485371: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:07.485528: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025467.494998 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 
2024-04-25 06:11:08.495041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025468.706506 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:11:09.505060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:10.510746: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:11.515043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025471.696389 2764434 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ed6e02b542a677da_ldcg-aarch64-02-8fd05898-2575442-616e5a4d05a0a.tfrecord*. 2024-04-25 06:11:12.516632: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:13.517143: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025474.035035 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9467e7446faa8ee_ldcg-aarch64-02-1253d3d3-2575442-616e5a2fab4dd.tfrecord*. 
2024-04-25 06:11:14.525036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025475.133853 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__465373b204515895_ldcg-aarch64-02-24da0549-2575442-616e5a5520c24.tfrecord*. 2024-04-25 06:11:15.525864: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:16.526653: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025476.675032 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__74c628218b6f9770_ldcg-aarch64-02-a6d49c0e-2575442-616e59c121188.tfrecord*. 2024-04-25 06:11:16.913801: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 5000, chunk size: 68.3594KB. 2024-04-25 06:11:16.914319: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/checkpoints/checkpoint_6_5000. 
Checkpointing distributed tf.data snapshot writer took 459us 2024-04-25 06:11:16.915097: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3/streams/stream_0/checkpoints 2024-04-25 06:11:16.915449: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3, stream: 0, compression: SNAPPY } I0000 00:00:1714025476.957060 3288823 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_3 I0000 00:00:1714025477.066097 3289124 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/0 streams completed; 0/5000 splits assigned or completed. I0000 00:00:1714025477.325281 3289124 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9, created stream_0 and assigned to localhost:34683 2024-04-25 06:11:17.379009: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9, stream: 0, compression: SNAPPY } 2024-04-25 06:11:17.379532: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9, stream 0, chunk 0. 2024-04-25 06:11:17.545052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025477.818255 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9467e7446faa8ee_ldcg-aarch64-02-1253d3d3-2575442-616e5a2fab4dd.tfrecord*. 
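The records above trace a full stream lifecycle for tf_data_snapshot_3: TFRecord shards accumulate under streams/stream_0/uncommitted_chunks, a final writer checkpoint is written, the checkpoints directory is deleted, and the snapshot is marked finished while the next snapshot (tf_data_snapshot_9) is created and assigned to a worker. As a rough sketch only, assuming the public tf.data service API (DispatchServer, WorkerServer, distributed_save) and using placeholder paths and a placeholder dataset rather than anything from this run, a snapshot like these is typically requested as follows.

import tensorflow as tf

# Fault-tolerant dispatcher plus one worker; work_dir and the snapshot path are placeholders.
dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        work_dir="/tmp/tf_data_dispatcher",
        fault_tolerant_mode=True))
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher.target.split("://")[1]))

# A 5000-element dataset mirrors the 5000 splits reported in the progress records.
dataset = tf.data.Dataset.range(5000)

# Ask the service to write the snapshot; workers then stream chunks to disk,
# which is what the parallel_tfrecord_writer records above show happening.
tf.data.experimental.distributed_save(
    dataset,
    "/tmp/tf_data_snapshot",
    dispatcher.target.split("://")[1])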
2024-04-25 06:11:18.555058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:19.575192: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025479.815123 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 2024-04-25 06:11:20.585049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025480.928664 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 2024-04-25 06:11:21.625061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025481.966077 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__465373b204515895_ldcg-aarch64-02-24da0549-2575442-616e5a5520c24.tfrecord*. 
2024-04-25 06:11:22.635044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:23.645118: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025484.276987 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 2024-04-25 06:11:24.645326: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:25.685063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025485.878987 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 2024-04-25 06:11:26.685810: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:27.705070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025488.206316 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9467e7446faa8ee_ldcg-aarch64-02-1253d3d3-2575442-616e5a2fab4dd.tfrecord*. 
2024-04-25 06:11:28.715057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:29.725047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025489.945514 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:11:30.735055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025491.445174 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:11:31.742088: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025492.648091 2764273 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b9467e7446faa8ee_ldcg-aarch64-02-1253d3d3-2575442-616e5a2fab4dd.tfrecord*. 2024-04-25 06:11:32.745040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025493.660956 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 
2024-04-25 06:11:33.745397: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:34.755051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:35.765072: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025496.185522 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__7cf0b49d6c68a234_ldcg-aarch64-02-4dc271a-2575442-616e59c110414.tfrecord*. 2024-04-25 06:11:36.775054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025497.209104 2764271 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__465373b204515895_ldcg-aarch64-02-24da0549-2575442-616e5a5520c24.tfrecord*. 2024-04-25 06:11:37.785048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025498.326607 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 
2024-04-25 06:11:38.785348: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025499.445599 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__721c13f84f44ee73_ldcg-aarch64-02-a6d49c0e-2575442-616e5a8fccf81.tfrecord*. 2024-04-25 06:11:39.795047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:40.795976: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025501.205066 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 2024-04-25 06:11:41.815042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025502.495095 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a5926cb7ab8fe2db_ldcg-aarch64-02-9c85aac4-2575442-616e59c1c6370.tfrecord*. 2024-04-25 06:11:42.815267: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025503.729438 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__721c13f84f44ee73_ldcg-aarch64-02-a6d49c0e-2575442-616e5a8fccf81.tfrecord*. 
2024-04-25 06:11:43.825051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:44.835053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025504.975075 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a934831b7242a71d_ldcg-aarch64-02-9c85aac4-2575442-616e5a93652be.tfrecord*. 2024-04-25 06:11:45.845049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025506.065769 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__721c13f84f44ee73_ldcg-aarch64-02-a6d49c0e-2575442-616e5a8fccf81.tfrecord*. 2024-04-25 06:11:46.855159: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025507.075175 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__470cfd86cb7043f9_ldcg-aarch64-02-4dc271a-2575442-616e5a92405ea.tfrecord*. 2024-04-25 06:11:47.856450: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:48.060549: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 5000, chunk size: 68.3594KB. 
2024-04-25 06:11:48.061045: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/checkpoints/checkpoint_6_5000. Checkpointing distributed tf.data snapshot writer took 437us 2024-04-25 06:11:48.061805: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2/streams/stream_0/checkpoints 2024-04-25 06:11:48.062098: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2, stream: 0, compression: SNAPPY } I0000 00:00:1714025508.156899 3340119 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_2 I0000 00:00:1714025508.165650 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a934831b7242a71d_ldcg-aarch64-02-9c85aac4-2575442-616e5a93652be.tfrecord*. I0000 00:00:1714025508.275328 3340133 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5]: 0/0 streams completed; 0/5000 splits assigned or completed. I0000 00:00:1714025508.276040 3340133 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5, created stream_0 and assigned to localhost:40545 2024-04-25 06:11:48.290057: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5, stream: 0, compression: SNAPPY } 2024-04-25 06:11:48.290566: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5, stream 0, chunk 0. 
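The paths in these records pin down the on-disk layout for a stream: in-progress shards live under <snapshot>/streams/stream_0/uncommitted_chunks/, writer checkpoints such as checkpoint_6_5000 live under <snapshot>/streams/stream_0/checkpoints/, and the checkpoints directory is removed once the stream finishes. A small standalone helper that inspects just those two directories is sketched below; it uses nothing beyond the directory names visible in the log and is not part of the test itself.

import glob
import os

def summarize_stream(snapshot_base: str, stream: int = 0) -> dict:
    """Report in-progress shards and writer checkpoints for one stream."""
    stream_dir = os.path.join(snapshot_base, "streams", f"stream_{stream}")
    return {
        # e.g. chunk_0_CHUNK_SHARDS___shard__<hash>_<host>.tfrecord* files.
        "uncommitted_chunks": sorted(glob.glob(
            os.path.join(stream_dir, "uncommitted_chunks", "*"))),
        # e.g. checkpoint_6_5000; empty (or missing) once the stream is done.
        "checkpoints": sorted(glob.glob(
            os.path.join(stream_dir, "checkpoints", "*"))),
    }

# Example: summarize_stream("/tmp/.../tf_data_snapshot_0") for any of the
# snapshot base directories named in the records above.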
2024-04-25 06:11:48.875049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:49.885046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025510.595080 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a934831b7242a71d_ldcg-aarch64-02-9c85aac4-2575442-616e5a93652be.tfrecord*. I0000 00:00:1714025510.855672 3344460 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1]: 0/1 streams completed; 4695/5000 splits assigned or completed. I0000 00:00:1714025510.855740 3344795 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0]: 0/1 streams completed; 4766/5000 splits assigned or completed. 2024-04-25 06:11:50.895046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025511.615369 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:11:51.895544: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025512.637249 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__dbeccd8d30948983_ldcg-aarch64-02-1dd91bf4-2575442-616e59c1c4c81.tfrecord*. 
2024-04-25 06:11:52.896663: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025513.777322 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:11:53.905061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:54.915048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025515.281618 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__470cfd86cb7043f9_ldcg-aarch64-02-4dc271a-2575442-616e5a92405ea.tfrecord*. 2024-04-25 06:11:55.915489: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025516.725903 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 
2024-04-25 06:11:56.925062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:11:57.925760: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025517.926079 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:11:58.926758: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025519.637403 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__470cfd86cb7043f9_ldcg-aarch64-02-4dc271a-2575442-616e5a92405ea.tfrecord*. 2024-04-25 06:11:59.933395: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025520.685648 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:12:00.935046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025521.829777 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 
2024-04-25 06:12:01.965052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:02.975054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025523.433541 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:12:03.986925: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025524.827827 2762707 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a934831b7242a71d_ldcg-aarch64-02-9c85aac4-2575442-616e5a93652be.tfrecord*. 2024-04-25 06:12:05.005061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:06.009097: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025526.065527 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__721c13f84f44ee73_ldcg-aarch64-02-a6d49c0e-2575442-616e5a8fccf81.tfrecord*. 
2024-04-25 06:12:07.009277: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:08.025129: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:09.028996: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025529.188499 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:12:10.045053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:11.065154: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025531.535035 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 
2024-04-25 06:12:12.068536: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:13.085043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:14.091998: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025534.547284 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__470cfd86cb7043f9_ldcg-aarch64-02-4dc271a-2575442-616e5a92405ea.tfrecord*. 2024-04-25 06:12:15.092283: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:16.115047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025536.611701 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a23d6fea69767a5e_ldcg-aarch64-02-1dd91bf4-2575442-616e5aa3be59b.tfrecord*. I0000 00:00:1714025537.074463 3385277 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/1 streams completed; 182/5000 splits assigned or completed. 
2024-04-25 06:12:17.155872: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:18.165043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025538.285574 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:12:19.165245: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:20.185052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025540.315035 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__721c13f84f44ee73_ldcg-aarch64-02-a6d49c0e-2575442-616e5a8fccf81.tfrecord*. 2024-04-25 06:12:21.265050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025541.592375 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__721c13f84f44ee73_ldcg-aarch64-02-a6d49c0e-2575442-616e5a8fccf81.tfrecord*. 
2024-04-25 06:12:22.285047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:23.298059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:24.315176: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025544.866458 2762648 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__a23d6fea69767a5e_ldcg-aarch64-02-1dd91bf4-2575442-616e5aa3be59b.tfrecord*. 2024-04-25 06:12:25.335078: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025546.202901 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:12:26.345047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025547.308157 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 
2024-04-25 06:12:27.345573: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:28.365075: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025548.445096 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__721c13f84f44ee73_ldcg-aarch64-02-a6d49c0e-2575442-616e5a8fccf81.tfrecord*. 2024-04-25 06:12:29.371387: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025549.455039 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:12:30.385038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:31.385935: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:32.395050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025552.698120 2762708 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__721c13f84f44ee73_ldcg-aarch64-02-a6d49c0e-2575442-616e5a8fccf81.tfrecord*. 
2024-04-25 06:12:33.415066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:34.425054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:35.435088: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025555.606179 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:12:36.455075: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:37.465129: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:38.475042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025558.898272 2762646 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__470cfd86cb7043f9_ldcg-aarch64-02-4dc271a-2575442-616e5a92405ea.tfrecord*. 
2024-04-25 06:12:39.495102: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025560.175466 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:12:40.495390: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:41.515053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025561.999359 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:12:42.525057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:42.577235: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 5000, chunk size: 68.3594KB. 2024-04-25 06:12:42.577739: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/checkpoints/checkpoint_6_5000. 
Checkpointing distributed tf.data snapshot writer took 451us 2024-04-25 06:12:43.535040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:43.585448: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0/streams/stream_0/checkpoints 2024-04-25 06:12:43.585874: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0, stream: 0, compression: SNAPPY } I0000 00:00:1714025563.703959 3429512 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_0 I0000 00:00:1714025563.815327 3429512 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8]: 0/0 streams completed; 0/5000 splits assigned or completed. I0000 00:00:1714025563.816212 3429512 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8, created stream_0 and assigned to localhost:34683 2024-04-25 06:12:43.856189: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8, stream: 0, compression: SNAPPY } 2024-04-25 06:12:43.856778: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8, stream 0, chunk 0. I0000 00:00:1714025564.408881 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 
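At this point stream 0 of tf_data_snapshot_0 has been committed, the snapshot is reported as finished, and the manager immediately creates stream_0 of tf_data_snapshot_8 and assigns it to the next worker. Once a snapshot logs "Finished writing tf.data distributed snapshot", it should be readable as an ordinary dataset; a minimal sketch, assuming a TF release whose tf.data.Dataset.load understands the distributed-snapshot layout, with a hypothetical path in place of the temp directory above:

    import tensorflow as tf

    snapshot_path = "/tmp/tf_data_snapshot_0"  # hypothetical stand-in
    ds = tf.data.Dataset.load(snapshot_path)   # element_spec is recovered from the snapshot metadata
    for element in ds.take(3):
        print(element)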
2024-04-25 06:12:44.545059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:45.545261: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025565.905125 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:12:46.555042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:47.565047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025568.275393 3435556 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5]: 0/1 streams completed; 203/5000 splits assigned or completed. 2024-04-25 06:12:48.575061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025568.845108 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 
2024-04-25 06:12:49.595052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:49.599963: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 4893, chunk size: 66.8965KB. I0000 00:00:1714025569.939157 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:12:50.595383: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025570.862497 3440663 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1]: 0/1 streams completed; 4916/5000 splits assigned or completed. 2024-04-25 06:12:51.605412: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025572.055468 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 
2024-04-25 06:12:52.605597: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:53.605846: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:54.636125: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025575.053403 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:12:55.645053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:56.031089: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/checkpoints/checkpoint_6_4893. Checkpointing distributed tf.data snapshot writer took 6.431054s 2024-04-25 06:12:56.031694: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1, stream 0, chunk 6. I0000 00:00:1714025576.056950 3447161 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/uncommitted_chunks/chunk_6_CHUNK_SHARDS___shard__ad2256d8b381c2e_ldcg-aarch64-02-e447a2d6-2575442-616e5ad98c7b8.tfrecord*. 
2024-04-25 06:12:56.665084: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:57.675053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:12:58.675382: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025578.888224 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:12:59.695068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025580.145572 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:13:00.705051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025581.148073 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 
2024-04-25 06:13:01.725056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025582.155967 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:13:02.735052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:03.735219: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025584.476505 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:13:04.745043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:05.750944: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025586.246058 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 
2024-04-25 06:13:06.755048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:06.776305: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1, stream: 0, compression: SNAPPY }. Stream 0, chunk 6, number of elements in chunk: 107, chunk size: 1.46289KB. 2024-04-25 06:13:06.776766: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/checkpoints/checkpoint_8_107. Checkpointing distributed tf.data snapshot writer took 403us 2024-04-25 06:13:06.777281: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1/streams/stream_0/checkpoints 2024-04-25 06:13:06.777545: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1, stream: 0, compression: SNAPPY } I0000 00:00:1714025586.869144 3462579 snapshot_manager.cc:543] Finished writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_1 I0000 00:00:1714025586.985698 3462672 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7]: 0/0 streams completed; 0/5000 splits assigned or completed. 
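The chunk sizes in these checkpoint messages line up with the 14-byte records the writers keep logging: 5000, 4893, and 107 elements give 68.3594KB, 66.8965KB, and 1.46289KB respectively, and the two chunks of tf_data_snapshot_1's stream 0 (4893 + 107) account for all 5000 of its splits. A quick arithmetic check:

    # Each element is logged as a 14B TFRecord ("Writing TFRecord of 14B ...").
    for elements in (5000, 4893, 107):
        print(f"{elements} elements -> {elements * 14 / 1024:.4f}KB")
    # 5000 elements -> 68.3594KB   (tf_data_snapshot_0, stream 0, chunk 0)
    # 4893 elements -> 66.8965KB   (tf_data_snapshot_1, stream 0, chunk 0)
    # 107 elements  -> 1.4629KB    (tf_data_snapshot_1, stream 0, chunk 6)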
I0000 00:00:1714025586.986522 3462672 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7, created stream_0 and assigned to localhost:40545 2024-04-25 06:13:07.055556: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7, stream: 0, compression: SNAPPY } 2024-04-25 06:13:07.056086: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7, stream 0, chunk 0. I0000 00:00:1714025587.295613 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:13:07.775025: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:08.785039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025589.766989 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 
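As the new stream for tf_data_snapshot_7 starts, the paths in these messages spell out the per-stream layout the writers use: chunk shards are first written under uncommitted_chunks, and the writer's periodic checkpoints sit alongside them until the stream finishes, at which point the checkpoints directory is deleted (as seen for tf_data_snapshot_0 and _1 above). A small sketch of those observed paths, with a hypothetical base directory:

    import os

    base = "/tmp/tf_data_snapshot_7"  # hypothetical stand-in for the temp dir above
    stream_dir = os.path.join(base, "streams", "stream_0")
    # chunk_<n>_CHUNK_SHARDS___shard__<id>.tfrecord* shards land here first:
    uncommitted = os.path.join(stream_dir, "uncommitted_chunks")
    # checkpoint_<n>_<elements> files; removed once the stream is committed:
    checkpoints = os.path.join(stream_dir, "checkpoints")
    print(uncommitted, checkpoints)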
2024-04-25 06:13:09.805070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:10.813261: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025591.408128 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:13:11.813450: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:12.815037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:13.825059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025594.167764 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 2024-04-25 06:13:14.845032: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025595.825286 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 
2024-04-25 06:13:15.855042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:16.865048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025597.175680 3478548 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/1 streams completed; 526/5000 splits assigned or completed. 2024-04-25 06:13:17.866011: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025598.321038 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:13:18.866591: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025599.617120 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 
2024-04-25 06:13:19.875048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:20.885088: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025601.765567 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:13:21.895043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:22.895212: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025603.297050 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:13:23.895385: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025604.297530 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 
2024-04-25 06:13:24.905063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025605.306746 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:13:25.916365: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:26.935049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025607.676278 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:13:27.945043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:28.955050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025609.015716 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 
2024-04-25 06:13:29.955276: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025610.065041 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:13:30.955428: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025611.067948 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:13:31.969607: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025612.348107 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:13:32.975045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:33.985050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025614.846432 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 
2024-04-25 06:13:34.988729: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:35.999358: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025616.739067 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:13:37.007749: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025617.926482 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:13:38.010439: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025618.937308 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 
2024-04-25 06:13:39.025050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:40.045042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025620.695126 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:13:41.055435: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:42.065048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025622.769858 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:13:43.085056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025623.865511 3523000 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8]: 0/1 streams completed; 338/5000 splits assigned or completed. 
2024-04-25 06:13:44.095041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025624.407182 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:13:45.113493: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025626.005021 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:13:46.115040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:47.155051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025627.510682 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:13:48.165043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025628.295848 3530319 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5]: 0/1 streams completed; 480/5000 splits assigned or completed. 
2024-04-25 06:13:49.175075: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025629.547872 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:13:50.175245: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025630.795594 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:13:51.198412: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025631.956715 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:13:52.198740: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:53.205061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025633.667641 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 
2024-04-25 06:13:54.205280: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:55.205484: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:56.206318: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025636.341166 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:13:57.215049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:13:58.225061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025639.103333 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:13:59.245093: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025640.105017 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 
2024-04-25 06:14:00.255065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:01.275059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025641.387360 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:14:02.285091: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:03.295044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025643.395282 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 2024-04-25 06:14:04.305050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:05.325054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025646.205912 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 
2024-04-25 06:14:06.325890: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025647.077639 3551779 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7]: 0/1 streams completed; 411/5000 splits assigned or completed. I0000 00:00:1714025647.306859 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:14:07.336921: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:08.345054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:09.355059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:10.365237: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:11.375058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025651.457140 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 
2024-04-25 06:14:12.376606: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:13.385057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025653.973372 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:14:14.385401: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025654.985048 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:14:15.435063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:16.445107: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025657.225725 3559680 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/1 streams completed; 866/5000 splits assigned or completed. 
2024-04-25 06:14:17.445538: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025657.726499 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:14:18.455061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:19.465136: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025659.619553 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:14:20.475038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025661.279452 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:14:21.495046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025662.418859 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 
2024-04-25 06:14:22.515052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025663.508911 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:14:23.515279: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025664.509552 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:14:24.525048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:25.535046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025665.627770 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:14:26.545055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025667.085637 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 
2024-04-25 06:14:27.575048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025668.179540 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:14:28.585050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:29.605053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025670.148118 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:14:30.615055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:31.625066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:32.645065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025672.896012 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 
2024-04-25 06:14:33.655307: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025674.275764 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:14:34.659994: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025675.387625 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:14:35.675078: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025676.546066 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:14:36.685048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025677.546368 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 
2024-04-25 06:14:37.685246: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025678.606010 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:14:38.695047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:39.705051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:40.710685: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025681.287482 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:14:41.715093: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025682.427941 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 
2024-04-25 06:14:42.725053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:43.725290: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025683.886296 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. I0000 00:00:1714025683.913961 3585959 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8]: 0/1 streams completed; 1044/5000 splits assigned or completed. 2024-04-25 06:14:44.735046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025685.647607 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:14:45.755049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:46.775151: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025686.897831 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 
2024-04-25 06:14:47.784151: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025688.126178 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. I0000 00:00:1714025688.305003 3590352 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5]: 0/1 streams completed; 1134/5000 splits assigned or completed. 2024-04-25 06:14:48.785051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:49.795045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:50.805045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025690.899826 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:14:51.815047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025692.122746 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 
2024-04-25 06:14:52.825110: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025693.125103 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:14:53.835099: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:54.845115: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025694.883241 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:14:55.855074: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:56.865045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025697.226191 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 
2024-04-25 06:14:57.875094: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:14:58.895266: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025699.408747 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:14:59.900503: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:00.904086: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025701.378107 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:15:01.905048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:02.915062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025703.150329 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 
2024-04-25 06:15:03.925234: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025704.415024 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:15:04.935364: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025705.525150 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:15:05.935571: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025706.525457 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:15:06.945156: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025707.155840 3602905 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7]: 0/1 streams completed; 1179/5000 splits assigned or completed. I0000 00:00:1714025707.688224 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 
2024-04-25 06:15:07.955044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:08.965042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025709.257410 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:15:09.975051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:10.985045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025711.145518 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:15:11.985679: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025712.496065 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 
2024-04-25 06:15:13.005066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:14.015056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025714.046717 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:15:15.055073: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025715.246327 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:15:16.065060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025716.259886 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:15:17.085044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025717.255270 3612028 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/1 streams completed; 1551/5000 splits assigned or completed. 
I0000 00:00:1714025717.403197 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:15:18.085258: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025718.437174 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:15:19.095105: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025719.786834 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 2024-04-25 06:15:20.105079: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025721.053664 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 
2024-04-25 06:15:21.115065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:22.125042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025722.511142 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 2024-04-25 06:15:23.125259: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:24.145045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025724.702120 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 
2024-04-25 06:15:25.165057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:26.175052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:27.175599: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025727.456575 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:15:28.185041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025728.548125 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:15:29.194563: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025729.755092 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 
2024-04-25 06:15:30.195041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025730.802442 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:15:31.215051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:32.215882: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025732.315068 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:15:33.225085: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025733.561290 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:15:34.235048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025734.951888 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 
2024-04-25 06:15:35.235384: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:36.255065: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:37.255267: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025737.287680 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:15:38.265040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025739.109474 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:15:39.275040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025740.201971 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 
2024-04-25 06:15:40.295153: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:41.305041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025741.819467 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 2024-04-25 06:15:42.315034: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:43.335059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025743.995250 3641518 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8]: 0/1 streams completed; 1635/5000 splits assigned or completed. 2024-04-25 06:15:44.355044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025744.757225 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 
2024-04-25 06:15:45.375048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:46.389748: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:47.395038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025747.732872 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. I0000 00:00:1714025748.305596 3646498 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5]: 0/1 streams completed; 1754/5000 splits assigned or completed. 2024-04-25 06:15:48.405049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:49.405251: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025749.853059 3289684 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d88fd5d379a3f7eb_ldcg-aarch64-02-ff7c2006-2575442-616e5a7b716e5.tfrecord*. 
2024-04-25 06:15:50.415073: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:51.435043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025751.845079 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:15:52.442908: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:53.445042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025753.695545 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:15:54.455034: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025754.726625 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 
2024-04-25 06:15:55.465025: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:56.471541: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025756.986004 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:15:57.475055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:15:58.485046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025759.436337 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. 
2024-04-25 06:15:59.495047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:00.505044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:01.513177: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025761.865024 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:16:02.525043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025762.905484 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:16:03.535039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025764.040291 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 
2024-04-25 06:16:04.545048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025765.496364 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:16:05.549948: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:06.565056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025767.153838 3289685 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fd6fbe2644148824_ldcg-aarch64-02-4a2c7337-2575442-616e5a7b745ad.tfrecord*. I0000 00:00:1714025767.205714 3660567 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7]: 0/1 streams completed; 1785/5000 splits assigned or completed. 
2024-04-25 06:16:07.575069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:08.585049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:09.595048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025770.328121 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:16:10.615374: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:11.615735: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025772.215446 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 
2024-04-25 06:16:12.625055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:13.635075: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025774.595959 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:16:14.645041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:15.665041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025775.890233 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:16:16.675042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025777.046579 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. I0000 00:00:1714025777.306315 3666687 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/1 streams completed; 2021/5000 splits assigned or completed. 
2024-04-25 06:16:17.646005: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2022, chunk size: 27.6445KB. 2024-04-25 06:16:17.685064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025778.265022 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:16:18.685249: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025779.506584 3340634 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8a3338970987e1cf_ldcg-aarch64-02-19053597-2575442-616e5a98f99f8.tfrecord*. 2024-04-25 06:16:19.685436: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:20.687501: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025781.338207 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 
2024-04-25 06:16:21.695052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025782.455873 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:16:22.455877: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/checkpoints/checkpoint_2_2022. Checkpointing distributed tf.data snapshot writer took 4.809802s 2024-04-25 06:16:22.461319: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9, stream 0, chunk 2. 2024-04-25 06:16:22.705043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025783.481085 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:16:23.715053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:24.725060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025785.576818 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 
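The two snapshot_stream_writer.cc records above show one checkpoint cycle for tf_data_snapshot_9: the writer pauses after chunk 0 (2022 elements, 27.6445KB), writes streams/stream_0/checkpoints/checkpoint_2_2022 in 4.809802s, and then resumes with stream 0, chunk 2. The checkpoint file name appears to pair the next chunk index with the element count recorded so far; that reading is an assumption based only on this log, and the helper below is illustrative rather than a TensorFlow API:

import os
import re

# Checkpoint files in this log look like ".../checkpoints/checkpoint_2_2022".
# The pairing of (next chunk index, element count) is inferred from the
# surrounding log records and is an assumption, not documented behavior.
_CHECKPOINT_RE = re.compile(r"checkpoint_(?P<chunk>\d+)_(?P<elements>\d+)$")

def parse_checkpoint_name(path):
    """Returns (chunk_index, element_count) for a checkpoint file, or None."""
    match = _CHECKPOINT_RE.search(os.path.basename(path))
    if match is None:
        return None
    return int(match.group("chunk")), int(match.group("elements"))

Under that assumption, parse_checkpoint_name("checkpoint_2_2022") returns (2, 2022), matching the "Writing ... stream 0, chunk 2" record that follows.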
2024-04-25 06:16:25.735070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:26.735260: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:27.737379: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025788.356306 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:16:28.745048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025789.456860 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:16:29.755052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025790.555561 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 
2024-04-25 06:16:30.765076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:31.765376: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025792.765403 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:16:32.795051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:33.805056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025794.205304 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 2024-04-25 06:16:34.845039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:35.865059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025796.575846 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 
2024-04-25 06:16:36.875050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:37.885050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025797.925628 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 2024-04-25 06:16:38.895043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025799.499529 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:16:39.915049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025800.655673 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 
2024-04-25 06:16:40.925039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:41.938150: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025802.716270 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:16:42.945053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:43.955092: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025804.055581 3688282 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8]: 0/1 streams completed; 1973/5000 splits assigned or completed. 2024-04-25 06:16:44.965066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025805.047447 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 
2024-04-25 06:16:45.965752: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025806.365600 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 2024-04-25 06:16:46.975048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025807.441369 3340636 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__753c136d44f407a1_ldcg-aarch64-02-24da0549-2575442-616e5a98f1fee.tfrecord*. 2024-04-25 06:16:47.975260: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025808.355448 3691284 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5]: 0/1 streams completed; 2063/5000 splits assigned or completed. 2024-04-25 06:16:48.975439: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025809.435532 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 2024-04-25 06:16:49.596908: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2064, chunk size: 28.2188KB. 
2024-04-25 06:16:49.981363: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025810.908624 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:16:50.981579: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:51.984738: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:52.984929: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025813.544792 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:16:53.560370: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/checkpoints/checkpoint_2_2064. Checkpointing distributed tf.data snapshot writer took 3.963395s 2024-04-25 06:16:53.560674: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5, stream 0, chunk 2. 
2024-04-25 06:16:53.985112: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025814.929807 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 2024-04-25 06:16:54.985286: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:55.995040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:16:56.995412: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025817.079132 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*. 2024-04-25 06:16:57.997088: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025818.855067 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 
2024-04-25 06:16:59.005056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:00.005348: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:01.025047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:02.035052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025822.545020 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:17:03.045118: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:04.055061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:05.067678: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025825.681660 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 
2024-04-25 06:17:06.075047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:07.085051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025827.205939 3716289 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7]: 0/1 streams completed; 1999/5000 splits assigned or completed. 2024-04-25 06:17:08.095047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:08.095149: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:40545 due to timeout 2024-04-25 06:17:08.095169: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:34683 due to timeout I0000 00:00:1714025828.939255 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 
2024-04-25 06:17:09.105047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:10.125063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:11.135036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:12.155035: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025832.801044 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:17:13.175053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:14.195037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025835.134670 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 
2024-04-25 06:17:15.195656: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:16.195863: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025837.076234 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:17:17.205045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025837.415658 3747836 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/1 streams completed; 2357/5000 splits assigned or completed. I0000 00:00:1714025838.200066 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:17:18.215055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:19.225040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025839.265057 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 
2024-04-25 06:17:20.225249: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025840.467078 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 2024-04-25 06:17:21.235056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025841.656491 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:17:22.255081: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025842.985032 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:17:23.265039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:24.275050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025845.021702 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 
2024-04-25 06:17:25.279336: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:26.295057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025846.542569 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 2024-04-25 06:17:27.305038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:28.315084: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:29.345049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025849.846046 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*. 
2024-04-25 06:17:30.355048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:31.365237: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025851.515958 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 2024-04-25 06:17:32.375048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025852.829595 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 
2024-04-25 06:17:33.375855: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:34.395051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:35.415049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:36.415403: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025856.647872 3431315 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__d2c9325751ecafca_ldcg-aarch64-02-2f1d5cc0-2575442-616e5acdebfe3.tfrecord*. 2024-04-25 06:17:37.425043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025858.038163 3431316 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__b464a5a5a1be11f0_ldcg-aarch64-02-4d3c7768-2575442-616e5acdee762.tfrecord*. 2024-04-25 06:17:38.435044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025859.082478 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 
2024-04-25 06:17:39.445050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:40.455058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:41.465050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:42.475082: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025862.970096 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 2024-04-25 06:17:43.475918: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025864.145460 3792749 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8]: 0/1 streams completed; 2347/5000 splits assigned or completed. I0000 00:00:1714025864.195292 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*. 2024-04-25 06:17:44.226391: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2348, chunk size: 32.1016KB. 
2024-04-25 06:17:44.485048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:45.495049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025866.270984 3462979 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3cfdc3b0d7e31a12_ldcg-aarch64-02-6c92762f-2575442-616e5ae40e801.tfrecord*. 2024-04-25 06:17:46.495446: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:47.515054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025868.485793 3796884 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5]: 0/1 streams completed; 2512/5000 splits assigned or completed. 2024-04-25 06:17:48.525061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:49.525451: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025870.227216 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 
2024-04-25 06:17:50.525624: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:51.531931: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025872.496884 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*. 2024-04-25 06:17:52.585036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025873.512918 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8033a49bd6630fbd_ldcg-aarch64-02-9c85aac4-2575442-616e5ae413616.tfrecord*. 2024-04-25 06:17:53.556080: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/checkpoints/checkpoint_4_2348. Checkpointing distributed tf.data snapshot writer took 9.329615s 2024-04-25 06:17:53.556654: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8, stream 0, chunk 4. 
2024-04-25 06:17:53.585204: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:54.595045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025875.218546 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 2024-04-25 06:17:55.615057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:56.619349: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:17:57.625042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025878.105652 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*. 2024-04-25 06:17:58.635220: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025879.215250 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*. 
2024-04-25 06:17:59.655062: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025880.246780 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 2024-04-25 06:18:00.665047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:01.669996: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:02.670174: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:03.695063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025884.225359 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6c6bcce2173bdf8f_ldcg-aarch64-02-9c85aac4-2575442-616e5bfba7109.tfrecord*. 
2024-04-25 06:18:04.695367: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:05.705060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:06.709264: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025887.306257 3810965 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7]: 0/1 streams completed; 2354/5000 splits assigned or completed. I0000 00:00:1714025887.608623 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*. 2024-04-25 06:18:07.715057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:08.725054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:09.728457: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:10.745055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025890.795025 3462980 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6c6bcce2173bdf8f_ldcg-aarch64-02-9c85aac4-2575442-616e5bfba7109.tfrecord*. 
2024-04-25 06:18:10.795828: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7, stream: 0, compression: SNAPPY }. Stream 0, chunk 0, number of elements in chunk: 2355, chunk size: 32.1973KB. 2024-04-25 06:18:11.755050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025892.005036 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*. 2024-04-25 06:18:12.775056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:13.785068: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:14.795048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:15.805058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025896.705195 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 
2024-04-25 06:18:16.815036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025897.455505 3814439 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/1 streams completed; 2713/5000 splits assigned or completed. 2024-04-25 06:18:17.825041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025898.375479 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*. 2024-04-25 06:18:18.825320: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:19.826063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:20.826229: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:21.835043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:22.845055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:23.855040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:24.856029: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data 
service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025904.866612 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 2024-04-25 06:18:25.865035: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:26.885050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025907.505950 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*. 2024-04-25 06:18:27.887130: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:28.823014: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/checkpoints/checkpoint_4_2355. Checkpointing distributed tf.data snapshot writer took 18.02713s 2024-04-25 06:18:28.823459: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7, stream 0, chunk 4. I0000 00:00:1714025908.835241 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*. 
2024-04-25 06:18:28.895043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:29.902823: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:30.915029: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025911.796271 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*. 2024-04-25 06:18:31.915213: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:32.925046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025913.435568 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 
2024-04-25 06:18:33.935085: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:34.935983: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:35.945056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:36.955054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025917.724844 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 2024-04-25 06:18:37.965076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:38.975052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:39.975769: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025920.976158 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*. 
2024-04-25 06:18:40.985086: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:42.005046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025922.605899 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 2024-04-25 06:18:43.015042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:44.025135: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025924.215878 3823180 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8]: 0/1 streams completed; 2516/5000 splits assigned or completed. 2024-04-25 06:18:45.025428: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:46.038336: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025926.486854 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*. 
2024-04-25 06:18:47.045069: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025927.935642 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 2024-04-25 06:18:48.055050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025928.494989 3826710 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5]: 0/1 streams completed; 2647/5000 splits assigned or completed. 2024-04-25 06:18:49.064732: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:50.065049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:51.075052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025931.186279 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 
2024-04-25 06:18:52.085046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025932.187848 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 2024-04-25 06:18:53.125057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025934.136411 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 2024-04-25 06:18:54.155042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:55.165050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025935.235135 3818521 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__26f59616f6c4027c_ldcg-aarch64-02-19053597-2575442-616e5c16e9376.tfrecord*. 
2024-04-25 06:18:56.165386: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:18:57.176649: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025937.565571 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 2024-04-25 06:18:58.185040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025938.876142 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*. 2024-04-25 06:18:59.186326: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:00.195041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:01.205041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025942.139203 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*. 
2024-04-25 06:19:02.215034: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025943.157958 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 2024-04-25 06:19:03.235040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:04.242027: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:05.242191: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:06.242350: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025946.645032 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*. 2024-04-25 06:19:07.245052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025947.395337 3837197 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7]: 0/1 streams completed; 2519/5000 splits assigned or completed. 
I0000 00:00:1714025948.218667 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 2024-04-25 06:19:08.255044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:09.255213: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:10.265119: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025950.465051 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 2024-04-25 06:19:11.285038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:12.295040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025953.239187 3818521 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__26f59616f6c4027c_ldcg-aarch64-02-19053597-2575442-616e5c16e9376.tfrecord*. 
2024-04-25 06:19:13.295425: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:14.305060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025954.735797 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 2024-04-25 06:19:15.315242: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025955.915084 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 2024-04-25 06:19:16.355040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025957.158608 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*. 2024-04-25 06:19:17.365037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025957.525456 3846484 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/1 streams completed; 3399/5000 splits assigned or completed. 
2024-04-25 06:19:18.365516: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025958.517872 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 2024-04-25 06:19:19.365684: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:20.385047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025960.457316 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*. 2024-04-25 06:19:21.385232: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:22.395046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:23.405053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025963.578171 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*. 
2024-04-25 06:19:24.415050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:25.435057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025966.237413 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*. 2024-04-25 06:19:26.445055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:27.455058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025967.570779 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*. 2024-04-25 06:19:28.455254: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025968.685116 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 
2024-04-25 06:19:29.465051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:30.465225: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025970.956793 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 2024-04-25 06:19:31.485042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025972.065448 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*. 2024-04-25 06:19:32.505105: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025973.073697 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*. 
2024-04-25 06:19:33.515044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:34.525061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025975.115635 3818521 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__26f59616f6c4027c_ldcg-aarch64-02-19053597-2575442-616e5c16e9376.tfrecord*. 2024-04-25 06:19:35.535053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025976.317520 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 
2024-04-25 06:19:36.545043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:37.555045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:38.595039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:39.605045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025980.096538 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*. 2024-04-25 06:19:40.615053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:41.615271: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025981.626332 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 
2024-04-25 06:19:42.625041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025982.885148 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*. 2024-04-25 06:19:43.635053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025984.306797 3863440 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8]: 0/1 streams completed; 2857/5000 splits assigned or completed. 2024-04-25 06:19:44.655051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025985.566209 3818521 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__26f59616f6c4027c_ldcg-aarch64-02-19053597-2575442-616e5c16e9376.tfrecord*. 2024-04-25 06:19:45.675053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025986.586333 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*. 
2024-04-25 06:19:46.695047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:47.705050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025988.585452 3866057 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5]: 0/1 streams completed; 3130/5000 splits assigned or completed. 2024-04-25 06:19:48.705246: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:49.716234: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025990.188956 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 2024-04-25 06:19:50.735044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025991.715048 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*. 
2024-04-25 06:19:51.745037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:52.755182: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:53.765101: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:54.775056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025994.907158 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*. 2024-04-25 06:19:55.785143: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025996.678078 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*. 
2024-04-25 06:19:56.805045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:57.805226: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:58.805484: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025999.199855 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 2024-04-25 06:19:59.825041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:00.835047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:01.845047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:02.865047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026003.696051 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*. 
2024-04-25 06:20:03.875064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:04.885042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:05.895037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026005.895040 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*. 2024-04-25 06:20:06.905049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026007.415451 3873379 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7]: 0/1 streams completed; 2885/5000 splits assigned or completed. 2024-04-25 06:20:07.923348: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026008.298676 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*. 
2024-04-25 06:20:08.935044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026009.686914 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 2024-04-25 06:20:09.935326: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:10.955039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:11.965296: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:12.965492: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026013.661298 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*. 2024-04-25 06:20:13.965905: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026014.715033 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*. 
2024-04-25 06:20:14.985051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:16.005090: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:17.007956: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026017.266897 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*. I0000 00:00:1714026017.545205 3881929 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/1 streams completed; 3721/5000 splits assigned or completed. 2024-04-25 06:20:18.035109: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026018.378070 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*. 
2024-04-25 06:20:19.065054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:20.085063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:21.095048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:22.105055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026022.776044 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 2024-04-25 06:20:23.115041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:24.125051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026024.289042 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*. 
2024-04-25 06:20:25.132248: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026025.705588 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*.
2024-04-25 06:20:26.135040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:27.140650: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:28.145043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:29.155038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026030.158159 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*.
2024-04-25 06:20:30.175055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:31.175880: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:32.176769: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:33.185047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026033.466539 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*.
2024-04-25 06:20:34.195046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026034.667691 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*.
2024-04-25 06:20:35.215048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:36.215445: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026036.647887 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*.
2024-04-25 06:20:37.235040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026037.794892 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*.
2024-04-25 06:20:38.238050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:39.245082: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:40.255045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026040.447365 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*.
2024-04-25 06:20:41.265095: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:42.275047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026042.887000 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*.
2024-04-25 06:20:43.285050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:44.295048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026044.383278 3898850 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8]: 0/1 streams completed; 3034/5000 splits assigned or completed.
I0000 00:00:1714026045.157054 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*.
2024-04-25 06:20:45.305120: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026046.187911 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*.
2024-04-25 06:20:46.308799: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:47.308970: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026047.426292 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*.
2024-04-25 06:20:48.310183: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026048.635753 3901902 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5]: 0/1 streams completed; 3409/5000 splits assigned or completed.
I0000 00:00:1714026049.256512 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*.
2024-04-25 06:20:49.310339: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:50.315038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026050.998761 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*.
2024-04-25 06:20:51.325061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026052.165385 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*.
2024-04-25 06:20:52.335044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:53.344362: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026054.118753 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*.
2024-04-25 06:20:54.345101: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026055.270525 3818521 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__26f59616f6c4027c_ldcg-aarch64-02-19053597-2575442-616e5c16e9376.tfrecord*.
2024-04-25 06:20:55.355042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:56.355238: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026056.694816 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*.
2024-04-25 06:20:57.370128: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026058.167274 3818521 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__26f59616f6c4027c_ldcg-aarch64-02-19053597-2575442-616e5c16e9376.tfrecord*.
2024-04-25 06:20:58.372674: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:20:59.385055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026059.385045 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*.
2024-04-25 06:21:00.405092: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026060.514089 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*.
2024-04-25 06:21:01.415066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:02.425055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:03.435050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026063.916839 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*.
2024-04-25 06:21:04.455061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026064.929078 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*.
2024-04-25 06:21:05.465043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:06.475044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026066.865453 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__d7b0b462759db236_ldcg-aarch64-02-74f04e97-2575442-616e5b9e65209.tfrecord*.
2024-04-25 06:21:07.485043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026067.511918 3922954 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7]: 0/1 streams completed; 3355/5000 splits assigned or completed.
I0000 00:00:1714026067.907994 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*.
2024-04-25 06:21:08.485478: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:09.495046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:10.505044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026070.936714 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*.
2024-04-25 06:21:11.505671: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026071.975822 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*.
2024-04-25 06:21:12.511527: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026073.297479 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*.
2024-04-25 06:21:13.512126: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026074.325949 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*.
2024-04-25 06:21:14.525070: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026075.397967 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*.
2024-04-25 06:21:15.535052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:16.536653: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026076.986090 3818521 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__26f59616f6c4027c_ldcg-aarch64-02-19053597-2575442-616e5c16e9376.tfrecord*.
2024-04-25 06:21:17.545047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026077.636634 3937767 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/1 streams completed; 4236/5000 splits assigned or completed.
I0000 00:00:1714026078.129425 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*.
2024-04-25 06:21:18.546075: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:19.555040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:20.556570: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026080.627514 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__96c7b6bff2744dc_ldcg-aarch64-02-43fd43ba-2575442-616e5b9e65211.tfrecord*.
2024-04-25 06:21:21.556778: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:22.556951: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026082.560921 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*.
2024-04-25 06:21:23.565044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026084.207003 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*.
2024-04-25 06:21:24.575042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:25.585045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026086.173152 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*.
2024-04-25 06:21:26.605043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026087.173609 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*.
2024-04-25 06:21:27.625061: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:28.635055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:29.645043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:30.648471: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:31.655044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026092.366376 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*.
2024-04-25 06:21:32.665044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026093.606610 3818521 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__26f59616f6c4027c_ldcg-aarch64-02-19053597-2575442-616e5c16e9376.tfrecord*.
2024-04-25 06:21:33.675044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:34.705043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:35.735049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:36.755208: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026096.966932 3803196 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__da23e95557f4242d_ldcg-aarch64-02-2f1d5cc0-2575442-616e5bf5446ec.tfrecord*.
2024-04-25 06:21:37.755425: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026098.065464 3818521 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__26f59616f6c4027c_ldcg-aarch64-02-19053597-2575442-616e5c16e9376.tfrecord*.
2024-04-25 06:21:38.765039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:39.775037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026100.315621 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*.
2024-04-25 06:21:40.785046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026101.597635 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*.
2024-04-25 06:21:41.795042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:42.815053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026103.056170 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*.
2024-04-25 06:21:43.835056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026104.465587 3962021 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8]: 0/1 streams completed; 3867/5000 splits assigned or completed.
2024-04-25 06:21:44.845047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026105.677759 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*.
2024-04-25 06:21:45.855046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:46.865041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026107.426050 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__328ba080fc398673_ldcg-aarch64-02-43fd43ba-2575442-616e5cc104fd3.tfrecord*.
2024-04-25 06:21:47.885054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026108.703649 3964450 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5]: 0/1 streams completed; 4090/5000 splits assigned or completed.
2024-04-25 06:21:48.925064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:49.935059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026110.568963 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b04703da207ef778_ldcg-aarch64-02-74f04e97-2575442-616e5cc0f4398.tfrecord*.
2024-04-25 06:21:50.965053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:51.975042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:52.975916: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026113.095983 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__328ba080fc398673_ldcg-aarch64-02-43fd43ba-2575442-616e5cc104fd3.tfrecord*.
2024-04-25 06:21:53.976089: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:21:54.985049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026114.986402 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*.
2024-04-25 06:21:55.995048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026116.187469 3671703 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__328ba080fc398673_ldcg-aarch64-02-43fd43ba-2575442-616e5cc104fd3.tfrecord*.
2024-04-25 06:21:56.995356: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026117.495065 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b04703da207ef778_ldcg-aarch64-02-74f04e97-2575442-616e5cc0f4398.tfrecord*.
2024-04-25 06:21:58.015097: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026118.675951 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*.
2024-04-25 06:21:59.035045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:22:00.055042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:22:01.055375: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026121.475124 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*.
2024-04-25 06:22:02.055575: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:22:03.065050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
I0000 00:00:1714026123.736260 3671704 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__b04703da207ef778_ldcg-aarch64-02-74f04e97-2575442-616e5cc0f4398.tfrecord*.
2024-04-25 06:22:04.075045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:05.085042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026125.795032 3818521 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__26f59616f6c4027c_ldcg-aarch64-02-19053597-2575442-616e5c16e9376.tfrecord*. 2024-04-25 06:22:06.095072: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026126.815016 3818521 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__26f59616f6c4027c_ldcg-aarch64-02-19053597-2575442-616e5c16e9376.tfrecord*. 2024-04-25 06:22:07.105045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026127.566029 3974450 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7]: 0/1 streams completed; 3809/5000 splits assigned or completed. 2024-04-25 06:22:08.105908: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026128.586812 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 
2024-04-25 06:22:09.115045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026130.015194 3803197 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_8/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__70279806ca99550d_ldcg-aarch64-02-e447a2d6-2575442-616e5bf544992.tfrecord*. 2024-04-25 06:22:10.115492: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:11.125058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:12.165053: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026132.566161 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*. 2024-04-25 06:22:13.205057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:22:14.295058: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026134.656767 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 
2024-04-25 06:22:15.335052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026135.808627 3693810 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__e049359aba353990_ldcg-aarch64-02-d7e49ab6-2575442-616e5bbc0de6e.tfrecord*. 2024-04-25 06:22:16.345124: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026136.857876 3818520 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_7/streams/stream_0/uncommitted_chunks/chunk_4_CHUNK_SHARDS___shard__804b8062c0bf2c89_ldcg-aarch64-02-8ccc820e-2575442-616e5c16e92ed.tfrecord*. 2024-04-25 06:22:17.355088: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026137.715972 3987195 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_9]: 0/1 streams completed; 4824/5000 splits assigned or completed. I0000 00:00:1714026138.018017 3693809 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/668832b5c288693bc5b6071fe73c1764j8hlwd3i/tmpne6td544/tmpn1mxfp2e/tf_data_snapshot_5/streams/stream_0/uncommitted_chunks/chunk_2_CHUNK_SHARDS___shard__28cde3578a7e01f1_ldcg-aarch64-02-88831d11-2575442-616e5bbc1058e.tfrecord*. 
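[annotation] The snapshot progress records above show this shard's snapshots still short of 5000 splits while the test polls for completion; it eventually times out inside wait_for_snapshot (distributed_save_ft_test.py line 92) in the traceback below. A hypothetical sketch of such a polling helper follows, assuming a completed distributed snapshot is marked by a DONE file and a failed one by an ERROR file at the snapshot root; the helper name and marker files are assumptions for illustration, not the test's actual code.

import os
import time

def wait_for_snapshot(snapshot_path, timeout_s=300.0):
    """Hypothetical polling wait in the spirit of the timed-out test frame.

    Assumes the snapshot manager marks completion with a DONE file and
    failure with an ERROR file at the snapshot root (an assumption here).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(os.path.join(snapshot_path, "DONE")):
            return True
        if os.path.exists(os.path.join(snapshot_path, "ERROR")):
            raise RuntimeError(f"snapshot at {snapshot_path} failed")
        time.sleep(1.0)  # poll until the marker file appears or we time out
    raise TimeoutError(f"snapshot at {snapshot_path} did not finish in {timeout_s}s")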
2024-04-25 06:22:18.355259: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:22:19.365052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
2024-04-25 06:22:20.425052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
-- Test timed out at 2024-04-25 06:22:20 UTC --
Current thread 0x0000ffff8d0a7420 (most recent call first):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.py", line 92 in wait_for_snapshot
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.py", line 323 in testWorkersDontExceedMaxStreamAssignments
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343 in execute_test_method
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360 in decorated
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/parameterized.py", line 314 in bound_param_test
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 579 in _callTestMethod
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 623 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 678 in __call__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 122 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 84 in __call__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 122 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 84 in __call__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/runner.py", line 217 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/main.py", line 274 in runTests
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/main.py", line 102 in __init__
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2537 in _run_and_get_tests_result
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2568 in run_tests
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2156 in _run_in_app
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2049 in main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 51 in g_main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/app.py", line 258 in _run_main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/app.py", line 312 in run
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 60 in main_wrapper
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/benchmark.py", line 489 in benchmarks_main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 62 in main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/test.py", line 53 in main
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.py", line 534 in
================================================================================
==================== Test output for //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_ft_test (shard 12 of 17):
2024-04-25 06:19:16.916461: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Running tests under Python 3.11.6: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/python_aarch64-unknown-linux-gnu/bin/python3
[ RUN ] SnapshotFtTest.testLargeMultiSourceSnapshotRecoversAndCompletes_test_mode_eager_tfapiversion_1_numsources_3_numworkers_3
[ SKIPPED ] SnapshotFtTest.testLargeMultiSourceSnapshotRecoversAndCompletes_test_mode_eager_tfapiversion_1_numsources_3_numworkers_3
[ RUN ] SnapshotFtTest.testMultipleDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numworkers_1
[ SKIPPED ] SnapshotFtTest.testMultipleDatasetRecoversAndCompletes_test_mode_graph_tfapiversion_1_numworkers_1
[ RUN ] SnapshotFtTest.testNonrepeatedDatasetDoesntProduceSecondRepetitionDir_test_mode_eager_tfapiversion_2_numsources_1_numworkers_5
2024-04-25 06:19:20.741321: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x/tf_data_dispatcher_journal
2024-04-25 06:19:20.741416: I tensorflow/core/data/service/dispatcher_impl.cc:243] No journal found. Starting dispatcher from new state.
2024-04-25 06:19:20.742182: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:19:20.742214: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:38617 2024-04-25 06:19:20.755088: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: INVALID_ARGUMENT: The current number of workers must be positive 2024-04-25 06:19:20.778243: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38617. Worker config: protocol: "grpc" dispatcher_address: "localhost:38617" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:19:20.778477: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:41595 2024-04-25 06:19:20.786856: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38617. Worker config: protocol: "grpc" dispatcher_address: "localhost:38617" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:19:20.805367: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:33141 2024-04-25 06:19:20.819180: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38617. Worker config: protocol: "grpc" dispatcher_address: "localhost:38617" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:19:20.819466: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:38179 2024-04-25 06:19:20.821849: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38617. Worker config: protocol: "grpc" dispatcher_address: "localhost:38617" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:19:20.822047: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:38887 2024-04-25 06:19:20.824204: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38617. 
Worker config: protocol: "grpc" dispatcher_address: "localhost:38617" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:19:20.824391: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:38597 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR I0000 00:00:1714025961.287407 3848975 snapshot_manager.cc:181] Starting to write tf.data snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot I0000 00:00:1714025961.557567 3848975 snapshot_manager.cc:192] Started writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot 2024-04-25 06:19:21.575261: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 41595 I0000 00:00:1714025961.749632 3849645 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, created stream_4 and assigned to localhost:33141 I0000 00:00:1714025961.766036 3849644 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, created stream_2 and assigned to localhost:38597 I0000 00:00:1714025961.767841 3849646 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, created stream_3 and assigned to localhost:38179 I0000 00:00:1714025961.765594 3849573 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, created stream_1 and assigned to localhost:38887 I0000 00:00:1714025962.093334 3848970 snapshot_manager.cc:687] For snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, created stream_0 and assigned to localhost:41595 2024-04-25 06:19:22.195684: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:19:22.245312: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 38617 2024-04-25 06:19:22.246125: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x/tf_data_dispatcher_journal 2024-04-25 06:19:22.329205: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 101us. 
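[annotation] The dispatcher and worker configs logged above correspond to the in-process tf.data service API. A minimal sketch of that topology, assuming a single-host setup; the work_dir path and worker count are illustrative and this is not the test's own setup code:

import tensorflow as tf

# One fault-tolerant dispatcher plus several workers on the same host,
# mirroring the config values printed in the log. The journal that the
# dispatcher later restores from lives under work_dir.
work_dir = "/tmp/tf_data_dispatcher"  # illustrative placeholder

dispatcher = tf.data.experimental.service.DispatchServer(
    tf.data.experimental.service.DispatcherConfig(
        protocol="grpc",
        work_dir=work_dir,
        fault_tolerant_mode=True,       # enables journal-based recovery
        job_gc_check_interval_ms=1000,
        job_gc_timeout_ms=300000,
    ))

dispatcher_address = dispatcher.target.split("://")[1]  # e.g. "localhost:38617"

workers = [
    tf.data.experimental.service.WorkerServer(
        tf.data.experimental.service.WorkerConfig(
            dispatcher_address=dispatcher_address,
            heartbeat_interval_ms=100,
            dispatcher_timeout_ms=5000,
        ))
    for _ in range(5)
]
print("dispatcher running at", dispatcher.target)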
2024-04-25 06:19:22.346667: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 4, compression: SNAPPY } 2024-04-25 06:19:22.365535: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:19:22.365835: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream 4, chunk 0. 2024-04-25 06:19:22.389763: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream 1, chunk 0. 2024-04-25 06:19:22.429651: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:19:22.430000: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:19:22.430519: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream 2, chunk 0. 2024-04-25 06:19:22.445674: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:19:22.465452: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 3, compression: SNAPPY } 2024-04-25 06:19:22.465962: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream 3, chunk 0. 
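[annotation] The snapshot_stream_writer records above are the worker side of a distributed snapshot; the client side is a single distributed_save call. A rough sketch, assuming the dispatcher from the previous sketch and the distributed_save(dataset, path, data_service_address) form; the dataset and output path are toy placeholders:

import tensorflow as tf

# Ask the tf.data service to materialize a dataset as a distributed
# snapshot; workers then write the per-stream chunk files that
# snapshot_stream_writer / parallel_tfrecord_writer log above.
dataset = tf.data.Dataset.range(5000)       # toy dataset
snapshot_path = "/tmp/tf_data_snapshot"      # illustrative placeholder

tf.data.experimental.distributed_save(
    dataset,
    snapshot_path,
    dispatcher.target.split("://")[1],       # dispatcher from previous sketch
)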
2024-04-25 06:19:22.505687: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:19:22.535245: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:19:22.535763: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream 0, chunk 0. I0000 00:00:1714025962.826595 3850874 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot 2024-04-25 06:19:22.835099: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 38617 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:19:22.835265: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:38617 2024-04-25 06:19:22.855076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:23.875041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:24.875203: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025965.455244 3851260 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__149ebb5929b8ac5b_ldcg-aarch64-02-72a77d4c-3844986-616e5c4a21393.tfrecord*. 2024-04-25 06:19:25.475166: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 
2024-04-25 06:19:25.876346: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:26.885232: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025967.313889 3851199 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_3/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__76eae550680c51d5_ldcg-aarch64-02-20ef6a9-3844986-616e5c4a10e10.tfrecord*. 2024-04-25 06:19:27.895042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025968.677181 3851184 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ed1b16e3322d236c_ldcg-aarch64-02-2f5d1261-3844986-616e5c49fc37d.tfrecord*. 2024-04-25 06:19:28.789800: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38617. Worker config: port: 41595 protocol: "grpc" dispatcher_address: "localhost:38617" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:19:28.790019: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:41595 2024-04-25 06:19:28.805223: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 0, compression: SNAPPY } 2024-04-25 06:19:28.805812: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream 0, chunk 0. 
2024-04-25 06:19:28.905038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:29.905396: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:29.905453: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:33141 due to timeout 2024-04-25 06:19:30.925046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:31.935036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025972.056281 3851199 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_3/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__76eae550680c51d5_ldcg-aarch64-02-20ef6a9-3844986-616e5c4a10e10.tfrecord*. 2024-04-25 06:19:32.945040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025973.085042 3851069 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__6b801e2d4cf298de_ldcg-aarch64-02-f4cfcbf4-3844986-616e5c49faec5.tfrecord*. 2024-04-25 06:19:33.085839: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 2024-04-25 06:19:33.156504: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 33141 2024-04-25 06:19:33.922576: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 128ms. 2024-04-25 06:19:34.015592: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 106ms. 
2024-04-25 06:19:34.056099: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 116ms. 2024-04-25 06:19:34.056385: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 130ms. 2024-04-25 06:19:34.125796: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 132ms. 2024-04-25 06:19:34.165652: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 166ms. 2024-04-25 06:19:34.185729: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 174ms. 2024-04-25 06:19:34.205719: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 152ms. 2024-04-25 06:19:34.275632: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 222ms. 2024-04-25 06:19:34.355813: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 219ms. 2024-04-25 06:19:34.361333: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 191ms. 2024-04-25 06:19:34.362038: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 286ms. 2024-04-25 06:19:34.505644: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 309ms. 2024-04-25 06:19:34.553901: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 223ms. 2024-04-25 06:19:34.575777: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 275ms. 2024-04-25 06:19:34.675647: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 385ms. 2024-04-25 06:19:34.785666: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 358ms. 
2024-04-25 06:19:34.815407: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 267ms. 2024-04-25 06:19:34.851838: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 402ms. 2024-04-25 06:19:35.063397: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 412ms. 2024-04-25 06:19:35.085776: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 429ms. 2024-04-25 06:19:35.155745: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 533ms. 2024-04-25 06:19:35.266273: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 400ms. 2024-04-25 06:19:35.496672: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 495ms. 2024-04-25 06:19:35.525684: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 583ms. 2024-04-25 06:19:35.675693: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 595ms. 2024-04-25 06:19:35.695733: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 650ms. 2024-04-25 06:19:36.005740: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 583ms. 2024-04-25 06:19:36.115748: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 575ms. 2024-04-25 06:19:36.275792: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 763ms. 2024-04-25 06:19:36.355790: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down.. Will retry in 675ms. 
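[annotation] The grpc_util lines above show the split-provider client retrying "get next split" with a growing, jittered delay while the dispatcher is down. A generic sketch of that retry pattern under those assumptions; this is not TensorFlow's grpc_util implementation:

import random
import time

def retry_with_backoff(fn, initial_delay_s=0.1, max_delay_s=1.0, deadline_s=60.0):
    """Retry fn() with multiplicative, jittered backoff until a deadline.

    Mirrors the behaviour visible in the log (delays growing from ~100ms
    toward ~1s with jitter between attempts).
    """
    deadline = time.monotonic() + deadline_s
    delay = initial_delay_s
    while True:
        try:
            return fn()
        except Exception as e:  # in the log: UNAVAILABLE / CANCELLED statuses
            if time.monotonic() >= deadline:
                raise
            sleep_s = min(delay, max_delay_s) * random.uniform(0.7, 1.3)
            print(f"Failed: {e}. Will retry in {int(sleep_s * 1000)}ms.")
            time.sleep(sleep_s)
            delay *= 1.3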
2024-04-25 06:19:36.416670: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:19:36.417141: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:19:36.417393: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:19:36.425718: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: CANCELLED: Failed to perform worker heartbeat: Cancelled 2024-04-25 06:19:36.426152: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 38617 2024-04-25 06:19:36.426882: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x/tf_data_dispatcher_journal 2024-04-25 06:19:36.427220: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 210us. 2024-04-25 06:19:36.595592: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: Socket closed. Will retry in 937ms. 2024-04-25 06:19:36.705541: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: Socket closed. Will retry in 430ms. 2024-04-25 06:19:37.032381: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: Socket closed. Will retry in 924ms. 2024-04-25 06:19:37.045572: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: Socket closed. Will retry in 1177ms. I0000 00:00:1714025988.897056 3859561 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot 2024-04-25 06:19:48.915084: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 38617 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:19:48.915187: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:38617 2024-04-25 06:19:48.925079: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:49.029720: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38617. 
Worker config: port: 33141 protocol: "grpc" dispatcher_address: "localhost:38617" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:19:49.029913: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:33141 2024-04-25 06:19:49.145076: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 4, compression: SNAPPY } 2024-04-25 06:19:49.691501: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream 4, chunk 0. 2024-04-25 06:19:49.925252: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:19:49.925317: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:38179 due to timeout 2024-04-25 06:19:50.955144: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714025991.665787 3851184 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__ed1b16e3322d236c_ldcg-aarch64-02-2f5d1261-3844986-616e5c49fc37d.tfrecord*. 2024-04-25 06:19:51.667307: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. 2024-04-25 06:19:51.755138: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 38179 I0000 00:00:1714025999.115030 3856218 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_0/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__474395319a3f2247_ldcg-aarch64-02-cb1c4d85-3844986-616e5c5030446.tfrecord*. 
I0000 00:00:1714026003.735520 3851192 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__8196e2ee1617e5bd_ldcg-aarch64-02-aa7860b-3844986-616e5c4a2e3cd.tfrecord*. I0000 00:00:1714026005.931474 3867288 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__310ee786ac8d8903_ldcg-aarch64-02-b299d201-3844986-616e5c640b460.tfrecord*. I0000 00:00:1714026008.756021 3867287 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35a85f0b89d105dc_ldcg-aarch64-02-70d6d26c-3844986-616e5c640663c.tfrecord*. 2024-04-25 06:20:08.785736: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down. 2024-04-25 06:20:08.786126: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 38617 2024-04-25 06:20:08.786871: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x/tf_data_dispatcher_journal 2024-04-25 06:20:08.787256: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 237us. 2024-04-25 06:20:08.795091: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: Failed to get snapshot split: tf.data prefetched split provider is shut down. 
2024-04-25 06:20:08.845563: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:20:08.845928: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:20:08.846226: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:20:08.846516: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714026009.155980 3875167 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot 2024-04-25 06:20:09.175083: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 38617 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:20:09.175173: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:38617 2024-04-25 06:20:09.175274: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:09.213536: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 3, compression: SNAPPY } 2024-04-25 06:20:09.214107: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream 3, chunk 0. 2024-04-25 06:20:09.236151: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38617. Worker config: port: 38179 protocol: "grpc" dispatcher_address: "localhost:38617" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:20:09.236504: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:38179 2024-04-25 06:20:09.355245: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 135ms. 
2024-04-25 06:20:09.375230: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 117ms. 2024-04-25 06:20:09.495252: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 122ms. 2024-04-25 06:20:09.503297: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 176ms. 2024-04-25 06:20:09.625230: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 161ms. 2024-04-25 06:20:09.685206: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 187ms. 2024-04-25 06:20:09.756532: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:124] tf.data service snapshot writer is cancelled: CANCELLED: The tf.data service snapshot writer has been cancelled. I0000 00:00:1714026009.765132 3875582 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_3/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2ba9f56cf065f1fb_ldcg-aarch64-02-361bfddc-3844986-616e5c76a3e1b.tfrecord*. 2024-04-25 06:20:10.195040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:10.195115: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:38887 due to timeout 2024-04-25 06:20:10.609503: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 38887 I0000 00:00:1714026013.666222 3867288 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__310ee786ac8d8903_ldcg-aarch64-02-b299d201-3844986-616e5c640b460.tfrecord*. 
2024-04-25 06:20:13.690111: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:20:13.690515: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:20:13.690958: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 38617 2024-04-25 06:20:13.691750: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x/tf_data_dispatcher_journal 2024-04-25 06:20:13.692144: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 258us. 2024-04-25 06:20:13.745623: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:20:13.746009: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed I0000 00:00:1714026013.885673 3879426 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot 2024-04-25 06:20:13.905111: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 38617 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:20:13.905207: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:38617 2024-04-25 06:20:13.925057: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:13.949811: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38617. 
Worker config: port: 38887 protocol: "grpc" dispatcher_address: "localhost:38617" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:20:13.950041: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:38887 2024-04-25 06:20:13.965337: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:20:13.965828: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream 1, chunk 0. 2024-04-25 06:20:14.075267: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 107ms. 2024-04-25 06:20:14.095292: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 143ms. 2024-04-25 06:20:14.096421: I tensorflow/core/data/service/server_lib.cc:94] Shut down WorkerServer server running at port 38597 2024-04-25 06:20:14.185249: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 136ms. 2024-04-25 06:20:14.245230: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 132ms. 2024-04-25 06:20:14.325207: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 147ms. 2024-04-25 06:20:14.385175: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 239ms. 2024-04-25 06:20:14.485251: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 180ms. 2024-04-25 06:20:14.625211: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 240ms. 2024-04-25 06:20:14.685216: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: failed to connect to all addresses. Will retry in 255ms. 
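The repeated "Failed to Get next split for snapshot ... Will retry in NNNms" lines come from the client-side retry loop in grpc_util: while the dispatcher port is unreachable, each attempt to fetch the next snapshot split is retried after a short, slightly randomized and growing delay. The following is not the actual C++ implementation, only a Python sketch of the same jittered-backoff pattern; all names are illustrative.

    import random
    import time

    def retry_until_available(call, initial_delay_ms=100, max_delay_ms=1000,
                              deadline_s=60.0):
        """Retries `call` with jittered, capped backoff until it succeeds."""
        deadline = time.monotonic() + deadline_s
        delay_ms = initial_delay_ms
        while True:
            try:
                return call()
            except ConnectionError as err:  # stand-in for a grpc UNAVAILABLE error
                if time.monotonic() >= deadline:
                    raise
                sleep_ms = random.uniform(0.5, 1.5) * delay_ms
                print(f"Failed to get next split: {err}. Will retry in {int(sleep_ms)}ms.")
                time.sleep(sleep_ms / 1000.0)
                delay_ms = min(delay_ms * 1.2, max_delay_ms)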
I0000 00:00:1714026014.750276 3879839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1ea8e769c9752537_ldcg-aarch64-02-65571d34-3844986-616e5c7b2e202.tfrecord*. 2024-04-25 06:20:14.775984: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:20:14.776269: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:20:14.795075: I tensorflow/core/data/service/server_lib.cc:94] Shut down DispatchServer server running at port 38617 2024-04-25 06:20:14.795873: I tensorflow/core/data/service/dispatcher_impl.cc:236] Attempting to restore dispatcher state from journal in /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x/tf_data_dispatcher_journal 2024-04-25 06:20:14.796303: I tensorflow/core/data/service/dispatcher_impl.cc:253] Restored from journal in 300us. 2024-04-25 06:20:14.825588: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:20:14.875767: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: GOAWAY received. Will retry in 415ms. 2024-04-25 06:20:14.885660: W tensorflow/core/data/service/worker_impl.cc:590] Failed to send heartbeat to dispatcher: UNAVAILABLE: Failed to perform worker heartbeat: Socket closed 2024-04-25 06:20:14.945740: I tensorflow/core/data/service/grpc_util.cc:84] Failed to Get next split for snapshot: UNAVAILABLE: Failed to get snapshot split: GOAWAY received. Will retry in 347ms. I0000 00:00:1714026015.165896 3880194 snapshot_manager.cc:271] Resumed writing tf.data distributed snapshot at /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot 2024-04-25 06:20:15.175131: I tensorflow/core/data/service/dispatcher_impl.cc:272] Started tf.data service dispatcher with config port: 38617 protocol: "grpc" work_dir: "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmp4b1sxh4x" fault_tolerant_mode: true job_gc_check_interval_ms: 1000 job_gc_timeout_ms: 300000 client_timeout_ms: 300000 worker_timeout_ms: 200 worker_max_concurrent_snapshots: 3 2024-04-25 06:20:15.175268: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data DispatchServer running at 0.0.0.0:38617 2024-04-25 06:20:15.175830: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:15.230667: I tensorflow/core/data/service/worker_impl.cc:189] Worker registered with dispatcher running at localhost:38617. 
Worker config: port: 38597 protocol: "grpc" dispatcher_address: "localhost:38617" worker_address: "localhost:%port%" heartbeat_interval_ms: 100 dispatcher_timeout_ms: 5000 data_transfer_address: "localhost:%port%" snapshot_max_chunk_size_bytes: 16384 2024-04-25 06:20:15.230868: I tensorflow/core/data/service/server_lib.cc:82] Started tf.data WorkerServer running at 0.0.0.0:38597 2024-04-25 06:20:15.285468: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:120] Writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:20:15.286083: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:172] Writing distributed tf.data snapshot /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream 2, chunk 0. 2024-04-25 06:20:16.185054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:17.204130: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026017.356013 3879840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c2c0e0ee7aae9452_ldcg-aarch64-02-48e63615-3844986-616e5c7b35751.tfrecord*. 2024-04-25 06:20:18.205036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026018.405617 3875583 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_3/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fc701d095d87dee5_ldcg-aarch64-02-8a5e18ac-3844986-616e5c76a67c7.tfrecord*. 
2024-04-25 06:20:19.215031: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:20.215190: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:21.225071: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:22.245047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026022.767022 3867287 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35a85f0b89d105dc_ldcg-aarch64-02-70d6d26c-3844986-616e5c640663c.tfrecord*. 2024-04-25 06:20:23.245340: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026023.945526 3875582 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_3/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2ba9f56cf065f1fb_ldcg-aarch64-02-361bfddc-3844986-616e5c76a3e1b.tfrecord*. 
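Each "Writing TFRecord of 14B" message is one serialized element appended to a chunk shard under the stream's uncommitted_chunks/ directory; a chunk only becomes readable once it is moved out of that staging location. A small sketch of the write-then-rename idea using public TFRecord and gfile APIs follows; the directory layout, record contents, and committed destination name are placeholders, not the snapshot writer's real format.

    import tensorflow as tf

    base = "/tmp/tf_data_snapshot/streams/stream_0"            # placeholder path
    uncommitted = base + "/uncommitted_chunks/chunk_0.tfrecord"
    committed = base + "/committed_chunks/chunk_0.tfrecord"    # illustrative name

    tf.io.gfile.makedirs(base + "/uncommitted_chunks")
    tf.io.gfile.makedirs(base + "/committed_chunks")

    # Append small records to the staged shard, as the parallel writer does.
    with tf.io.TFRecordWriter(uncommitted) as writer:
        for i in range(310):  # 310 elements, matching one checkpointed chunk below
            writer.write(tf.io.serialize_tensor(tf.constant(i, tf.int64)).numpy())

    # "Commit" the finished chunk by renaming it out of the staging directory.
    tf.io.gfile.rename(uncommitted, committed, overwrite=True)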
2024-04-25 06:20:24.255039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:25.265054: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:26.275047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026026.297571 3879839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1ea8e769c9752537_ldcg-aarch64-02-65571d34-3844986-616e5c7b2e202.tfrecord*. 2024-04-25 06:20:27.276107: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:28.285040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:29.295043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026030.185534 3875583 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_3/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__fc701d095d87dee5_ldcg-aarch64-02-8a5e18ac-3844986-616e5c76a67c7.tfrecord*. 
2024-04-25 06:20:30.305047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:31.318067: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:32.325041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:33.325229: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026033.635091 3875582 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_3/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2ba9f56cf065f1fb_ldcg-aarch64-02-361bfddc-3844986-616e5c76a3e1b.tfrecord*. 2024-04-25 06:20:34.335064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:35.345081: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026035.536427 3867287 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35a85f0b89d105dc_ldcg-aarch64-02-70d6d26c-3844986-616e5c640663c.tfrecord*. 
2024-04-25 06:20:36.345253: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:37.345417: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026037.800857 3867288 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__310ee786ac8d8903_ldcg-aarch64-02-b299d201-3844986-616e5c640b460.tfrecord*. 2024-04-25 06:20:38.346092: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:39.346465: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:40.365038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:41.375074: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:42.385049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026042.876007 3867288 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__310ee786ac8d8903_ldcg-aarch64-02-b299d201-3844986-616e5c640b460.tfrecord*. 
2024-04-25 06:20:43.395042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:44.405049: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026045.206660 3879839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1ea8e769c9752537_ldcg-aarch64-02-65571d34-3844986-616e5c7b2e202.tfrecord*. 2024-04-25 06:20:45.415043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026046.208139 3879840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c2c0e0ee7aae9452_ldcg-aarch64-02-48e63615-3844986-616e5c7b35751.tfrecord*. 2024-04-25 06:20:46.425047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:47.425244: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:48.425750: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026049.417973 3880687 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3beaf8f864350f14_ldcg-aarch64-02-8faed150-3844986-616e5c7c706b3.tfrecord*. 
2024-04-25 06:20:49.426878: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:50.431579: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026051.061865 3880687 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3beaf8f864350f14_ldcg-aarch64-02-8faed150-3844986-616e5c7c706b3.tfrecord*. 2024-04-25 06:20:51.431752: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026052.164465 3879840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c2c0e0ee7aae9452_ldcg-aarch64-02-48e63615-3844986-616e5c7b35751.tfrecord*. 2024-04-25 06:20:52.435199: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:53.445110: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026054.120782 3879839 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__1ea8e769c9752537_ldcg-aarch64-02-65571d34-3844986-616e5c7b2e202.tfrecord*. 
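Because the chunk shards named in these messages are ordinary TFRecord files, an individual shard can be inspected directly while the test is running, for example to count how many elements have been staged so far. A short sketch; the glob pattern stands in for one of the logged uncommitted_chunks paths.

    import tensorflow as tf

    # Placeholder for one stream's staged shard files from the log above.
    pattern = "/tmp/tf_data_snapshot/streams/stream_1/uncommitted_chunks/*.tfrecord*"

    for shard in tf.io.gfile.glob(pattern):
        num_records = sum(1 for _ in tf.data.TFRecordDataset(shard))
        print(shard, num_records, "records of ~14 bytes each")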
2024-04-25 06:20:54.465039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026055.276831 3875582 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_3/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2ba9f56cf065f1fb_ldcg-aarch64-02-361bfddc-3844986-616e5c76a3e1b.tfrecord*. 2024-04-25 06:20:55.475046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:20:56.475212: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026056.693520 3880687 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3beaf8f864350f14_ldcg-aarch64-02-8faed150-3844986-616e5c7c706b3.tfrecord*. 2024-04-25 06:20:57.475393: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026058.166899 3867287 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__35a85f0b89d105dc_ldcg-aarch64-02-70d6d26c-3844986-616e5c640663c.tfrecord*. 2024-04-25 06:20:58.485043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026059.395750 3880687 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3beaf8f864350f14_ldcg-aarch64-02-8faed150-3844986-616e5c7c706b3.tfrecord*. 
2024-04-25 06:20:59.495040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026060.415036 3875582 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_3/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__2ba9f56cf065f1fb_ldcg-aarch64-02-361bfddc-3844986-616e5c76a3e1b.tfrecord*. 2024-04-25 06:21:00.505050: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:01.515052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:02.535056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:03.545063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026064.361279 3879840 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__c2c0e0ee7aae9452_ldcg-aarch64-02-48e63615-3844986-616e5c7b35751.tfrecord*. 
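Once every stream writer checkpoints its chunk and logs "Finished writing distributed tf.data snapshot stream" (as happens for streams 4, 2, 3 and 1 in the messages that follow, after which the dispatcher reports 4/5 streams completed and 1000/1000 splits assigned), the snapshot directory can be consumed like any other saved dataset. A sketch, assuming the snapshot is read back with tf.data.Dataset.load as the distributed-save workflow intends; depending on the TF version, element_spec or compression arguments may also be needed.

    import tensorflow as tf

    snapshot_path = "/tmp/tf_data_snapshot"   # placeholder for the logged path

    # Assumed read-back of the completed snapshot; the SnapshotWriterParams above
    # show its chunks were written with SNAPPY compression.
    ds = tf.data.Dataset.load(snapshot_path)
    for element in ds.take(5):
        print(element.numpy())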
2024-04-25 06:21:04.555044: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:05.555358: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026065.765515 3867288 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__310ee786ac8d8903_ldcg-aarch64-02-b299d201-3844986-616e5c640b460.tfrecord*. 2024-04-25 06:21:06.565509: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:07.595037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026067.708293 3880687 parallel_tfrecord_writer.cc:167] Writing TFRecord of 14B to file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_2/uncommitted_chunks/chunk_0_CHUNK_SHARDS___shard__3beaf8f864350f14_ldcg-aarch64-02-8faed150-3844986-616e5c7c706b3.tfrecord*. 2024-04-25 06:21:07.966692: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 4, compression: SNAPPY }. Stream 4, chunk 0, number of elements in chunk: 310, chunk size: 4.23828KB. 2024-04-25 06:21:07.967279: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/checkpoints/checkpoint_2_310. 
Checkpointing distributed tf.data snapshot writer took 536us 2024-04-25 06:21:07.967661: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_4/checkpoints 2024-04-25 06:21:07.967943: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 4, compression: SNAPPY } 2024-04-25 06:21:07.969338: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 2, compression: SNAPPY }. Stream 2, chunk 0, number of elements in chunk: 176, chunk size: 2.40625KB. 2024-04-25 06:21:07.969756: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_2/checkpoints/checkpoint_2_176. Checkpointing distributed tf.data snapshot writer took 388us 2024-04-25 06:21:07.970108: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_2/checkpoints 2024-04-25 06:21:07.970386: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 2, compression: SNAPPY } 2024-04-25 06:21:07.976235: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 3, compression: SNAPPY }. Stream 3, chunk 0, number of elements in chunk: 302, chunk size: 4.12891KB. 2024-04-25 06:21:07.976750: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_3/checkpoints/checkpoint_2_302. 
Checkpointing distributed tf.data snapshot writer took 459us 2024-04-25 06:21:07.977122: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_3/checkpoints 2024-04-25 06:21:07.977394: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 3, compression: SNAPPY } 2024-04-25 06:21:07.980414: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:288] Checkpointing distributed tf.data snapshot writer for snapshot SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 1, compression: SNAPPY }. Stream 1, chunk 0, number of elements in chunk: 202, chunk size: 2.76172KB. 2024-04-25 06:21:07.980844: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:306] Wrote checkpoint file /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/checkpoints/checkpoint_2_202. Checkpointing distributed tf.data snapshot writer took 398us 2024-04-25 06:21:07.981206: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:343] Deleting tf.data snapshot checkpoints directory: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot/streams/stream_1/checkpoints 2024-04-25 06:21:07.981499: I tensorflow/core/data/service/snapshot/snapshot_stream_writer.cc:135] Finished writing distributed tf.data snapshot stream: SnapshotWriterParams { base_path: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot, stream: 1, compression: SNAPPY } 2024-04-25 06:21:08.605063: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:09.605383: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:10.615037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:11.625042: 
W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:12.635047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:13.645078: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:14.665042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026074.855547 3931099 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed. 2024-04-25 06:21:15.685089: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:16.685336: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:17.705055: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:18.715143: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:19.745040: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:21:20.745460: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target 
processing times for at least one iteration
2024-04-25 06:21:21.746149: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
[warning repeated approximately once per second through 2024-04-25 06:22:14.389897]
I0000 00:00:1714026134.875617 3950541 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed.
2024-04-25 06:22:15.395056: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
[warning repeated approximately once per second through 2024-04-25 06:23:14.395040]
I0000 00:00:1714026194.935754 3981895 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed.
2024-04-25 06:23:15.405179: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
[warning repeated approximately once per second through 2024-04-25 06:24:14.265039]
I0000 00:00:1714026254.966613 3981895 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed.
2024-04-25 06:24:15.271987: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
[warning repeated approximately once per second through 2024-04-25 06:24:57.858595]
2024-04-25 06:24:58.875052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data
service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:24:59.895052: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:00.913315: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:01.913483: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:02.925104: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:03.965041: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:04.967164: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:05.985042: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:06.995043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:08.029025: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:09.045071: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:10.047736: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the 
optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:11.065039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:12.075126: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:13.085059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:25:14.115048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026315.035875 3981895 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed. 
2024-04-25 06:25:15.125035: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
[warning above repeated roughly once per second through 2024-04-25 06:26:14.985060]
I0000 00:00:1714026375.106922 144805 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed.
2024-04-25 06:26:15.985305: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
[warning above repeated roughly once per second through 2024-04-25 06:27:14.926252]
I0000 00:00:1714026435.179156 111440 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed.
2024-04-25 06:27:15.935034: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
[warning above repeated roughly once per second through 2024-04-25 06:27:58.365260]
2024-04-25 06:27:59.385102: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data
service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:00.415048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:01.435043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:02.435421: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:03.485188: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:04.495064: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:05.515024: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:06.515427: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:07.525028: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:08.545036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:09.547076: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:10.547234: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the 
optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:11.645025: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:12.675027: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:13.745036: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:28:14.755025: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026495.245509 331789 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed. 
2024-04-25 06:28:15.775026: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
[identical AutoScaler warning repeated roughly once per second through 2024-04-25 06:29:14.925042; duplicate lines omitted]
I0000 00:00:1714026555.285830 331789 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed.
2024-04-25 06:29:15.945060: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
[identical AutoScaler warning repeated roughly once per second through 2024-04-25 06:29:23.095067; duplicate lines omitted]
2024-04-25 06:29:23.095118: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:38887 due to timeout
2024-04-25 06:29:23.095140: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:33141 due to timeout
2024-04-25 06:29:23.095152: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:38179 due to timeout
2024-04-25 06:29:23.095162: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:41595 due to timeout
2024-04-25 06:29:23.095172: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:38597 due to timeout
[identical AutoScaler warning resumed at 2024-04-25 06:29:24.105044 and repeated roughly once per second through 2024-04-25 06:30:14.835546; duplicate lines omitted]
I0000 00:00:1714026615.285875 437699 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed.
2024-04-25 06:30:15.845047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
[identical AutoScaler warning repeated roughly once per second through 2024-04-25 06:30:58.355063; duplicate lines omitted]
2024-04-25 06:30:59.357992: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data
service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:00.365059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:01.525022: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:02.535043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:03.545130: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:04.549235: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:05.555035: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:06.556766: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:07.565039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:08.565231: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:09.575039: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:10.655045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the 
optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:11.685059: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:12.699098: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:13.705043: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:31:14.715046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration I0000 00:00:1714026675.377600 475313 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed. 
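The snapshot progress lines in this log come from the dispatcher's snapshot manager, which tracks how many streams (one per assigned worker) and splits have been written for the distributed snapshot under test. Below is a minimal sketch of how such a snapshot is typically driven; it assumes the public tf.data service API (DispatchServer, WorkerServer, distributed_save, available in recent TF releases, with signatures that may differ by version), and the work_dir, worker count, dataset, and snapshot path are illustrative values, not the test's actual configuration.

    import tensorflow as tf

    # Dispatcher with fault tolerance on: its state journal lives under work_dir,
    # so a restarted dispatcher can recover in-progress snapshots.
    dispatcher = tf.data.experimental.service.DispatchServer(
        tf.data.experimental.service.DispatcherConfig(
            work_dir="/tmp/tf_data_dispatcher",
            fault_tolerant_mode=True))
    dispatcher_address = dispatcher.target.split("://")[1]

    # A few workers that register with the dispatcher and write snapshot streams.
    workers = [
        tf.data.experimental.service.WorkerServer(
            tf.data.experimental.service.WorkerConfig(
                dispatcher_address=dispatcher_address))
        for _ in range(3)]

    # Kick off the asynchronous distributed snapshot; the dispatcher then logs
    # "N/M streams completed; K/K splits assigned or completed" as workers make progress.
    dataset = tf.data.Dataset.range(1000)
    tf.data.experimental.distributed_save(
        dataset, "/tmp/tf_data_snapshot", dispatcher_address)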
2024-04-25 06:31:15.735048: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration
[AutoScaler UNAVAILABLE warning repeated approximately once per second from 06:31:15.735048 through 06:31:46.106080]
2024-04-25 06:31:46.106151: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:38887 due to timeout
2024-04-25 06:31:46.106169: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:38597 due to timeout
2024-04-25 06:31:46.106180: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:38179 due to timeout
2024-04-25 06:31:46.106192: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:41595 due to timeout
2024-04-25 06:31:46.106202: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:33141 due to timeout
[AutoScaler UNAVAILABLE warning repeated approximately once per second from 06:31:47.125034 through 06:31:51.185040]
2024-04-25 06:31:51.185109: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:33141 due to timeout
2024-04-25 06:31:51.185127: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:38597 due to timeout
2024-04-25 06:31:51.185139: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:41595 due to timeout
[AutoScaler UNAVAILABLE warning repeated approximately once per second from 06:31:52.195039 through 06:32:14.595020]
I0000 00:00:1714026735.445875 617639 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed.
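The "Lost worker ... due to timeout" entries are the dispatcher noticing that workers have stopped heartbeating for longer than its worker timeout, which is the kind of failure this fault-tolerance test exercises. A rough sketch of the worker-side heartbeat knobs involved follows; the dispatcher address and the interval/timeout values are illustrative assumptions, not the test's settings.

    import tensorflow as tf

    # The worker reports to the dispatcher every heartbeat_interval_ms; when those
    # heartbeats stop arriving, the dispatcher eventually logs the worker as lost.
    worker = tf.data.experimental.service.WorkerServer(
        tf.data.experimental.service.WorkerConfig(
            dispatcher_address="localhost:5050",  # assumed address for illustration
            heartbeat_interval_ms=100,
            dispatcher_timeout_ms=5000))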
[AutoScaler UNAVAILABLE warning repeated approximately once per second from 06:32:15.615050 through 06:33:14.525038]
I0000 00:00:1714026795.456935 792594 snapshot_manager.cc:648] tf.data snapshot progress [/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/1d024e883d837761a87a1a58a379aed3h7m7i2pl/tmpxaglcbr1/tmpkqw_kk19/tf_data_snapshot]: 4/5 streams completed; 1000/1000 splits assigned or completed.
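Once the snapshot manager reports every stream completed, the written snapshot is intended to be readable back as a dataset. A small, hedged example of that read path, assuming the snapshot produced by distributed_save is consumable with tf.data.Dataset.load and using a placeholder path instead of the long Bazel temp directory above:

    import tensorflow as tf

    snapshot_path = "/tmp/tf_data_snapshot"  # placeholder for the test's temp path
    restored = tf.data.Dataset.load(snapshot_path)  # element_spec is read from the snapshot metadata
    for element in restored.take(3):
        print(element.numpy())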
[AutoScaler UNAVAILABLE warning repeated approximately once per second from 06:33:15.530727 through 06:33:33.755180]
2024-04-25 06:33:33.755249: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:38887 due to timeout
2024-04-25 06:33:33.755269: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:41595 due to timeout
2024-04-25 06:33:33.755281: I tensorflow/core/data/service/dispatcher_impl.cc:1493] Lost worker localhost:38179 due to timeout
[AutoScaler UNAVAILABLE warning repeated approximately once per second from 06:33:34.775042 through 06:33:57.975256, where this log excerpt breaks off mid-entry]
updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:33:59.015046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:00.015395: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:01.025038: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:02.025385: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:03.045037: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:04.045394: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:05.085045: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:06.085662: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:07.085861: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:08.135066: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:09.145078: W 
tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:10.165072: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:11.215047: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:12.365033: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:13.375051: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration 2024-04-25 06:34:14.385046: W tensorflow/core/data/service/dispatcher_impl.cc:1405] Error updating the optimal number of workers metric in tf.data service AutoScaler: UNAVAILABLE: Cannot update the optimal number of workers metric because there are no reported processing and target processing times for at least one iteration -- Test timed out at 2024-04-25 06:34:15 UTC -- Current thread 0x0000ffff83927420 (most recent call first): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.py", line 92 in wait_for_snapshot File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.py", line 433 in testNonrepeatedDatasetDoesntProduceSecondRepetitionDir File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343 in execute_test_method File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360 in decorated File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/parameterized.py", line 314 in bound_param_test File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 579 in _callTestMethod File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 623 in run File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 678 in __call__ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 122 in run File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 84 in __call__ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 122 in run File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 84 in __call__ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/runner.py", line 217 in run File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/main.py", line 274 in runTests File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/main.py", line 102 in __init__ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2537 in _run_and_get_tests_result File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2568 in run_tests File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2156 in _run_in_app File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/testing/absltest.py", line 2049 in main File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 51 in g_main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/app.py", line 258 in _run_main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/absl_py/absl/app.py", line 312 in run File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 60 in main_wrapper File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/benchmark.py", line 489 in benchmarks_main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 62 in main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/platform/test.py", line 53 in main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test.py", line 534 in ================================================================================ ==================== Test output for //tensorflow/python/kernel_tests/linalg:matrix_triangular_solve_op_test_cpu (shard 1 of 3): 2024-04-25 06:27:52.256901: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. 
Running tests under Python 3.11.6: /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/python_aarch64-unknown-linux-gnu/bin/python3 [ RUN ] MatrixTriangularSolveOpTest.testEmpty INFO:tensorflow:time(__main__.MatrixTriangularSolveOpTest.testEmpty): 0.08s I0425 06:28:02.756749 281473183151136 test_util.py:2634] time(__main__.MatrixTriangularSolveOpTest.testEmpty): 0.08s [ OK ] MatrixTriangularSolveOpTest.testEmpty [ RUN ] MatrixTriangularSolveOpTest.testSolve WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/contextlib.py:105: TensorFlowTestCase.test_session (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `self.session()` or `self.cached_session()` instead. W0425 06:28:02.764001 281473183151136 deprecation.py:50] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/contextlib.py:105: TensorFlowTestCase.test_session (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `self.session()` or `self.cached_session()` instead. 2024-04-25 06:28:02.769474: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled INFO:tensorflow:time(__main__.MatrixTriangularSolveOpTest.testSolve): 1.58s I0425 06:28:04.341924 281473183151136 test_util.py:2634] time(__main__.MatrixTriangularSolveOpTest.testSolve): 1.58s [ OK ] MatrixTriangularSolveOpTest.testSolve [ RUN ] MatrixTriangularSolveOpTest.testSolveBatchBroadcastLargerBatches -- Test timed out at 2024-04-25 06:42:49 UTC -- Current thread 0x0000ffff95187420 (most recent call first): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/pypi_numpy/site-packages/numpy/linalg/linalg.py", line 400 in solve File "<__array_function__ internals>", line 180 in solve File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test.py", line 89 in _verifySolve File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test.py", line 31 in _verifySolveAllWays File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test.py", line 41 in _verifySolveAllWaysReal File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test.py", line 173 in testSolveBatchBroadcastLargerBatches File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 1858 in decorated File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 579 in _callTestMethod File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 623 in run File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/case.py", line 678 in __call__ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 122 in run File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 84 in __call__ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 122 in run File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/suite.py", line 84 in __call__ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/runner.py", line 217 in run File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/main.py", line 274 in runTests File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/external/python_aarch64-unknown-linux-gnu/lib/python3.11/unittest/main.py", line 102 in __init__ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/absl_py/absl/testing/absltest.py", line 2537 in _run_and_get_tests_result File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/absl_py/absl/testing/absltest.py", line 2568 in run_tests File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/absl_py/absl/testing/absltest.py", line 2156 in _run_in_app File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/absl_py/absl/testing/absltest.py", line 2049 in main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 51 in g_main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/absl_py/absl/app.py", line 258 in _run_main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/absl_py/absl/app.py", line 312 in run File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 60 in main_wrapper File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/org_tensorflow/tensorflow/python/platform/benchmark.py", line 489 in benchmarks_main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 62 in main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/org_tensorflow/tensorflow/python/platform/test.py", line 53 in main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test.py", line 244 in ================================================================================ //tensorflow/c:c_api_experimental_test PASSED in 22.6s //tensorflow/c:c_api_function_test PASSED in 24.6s //tensorflow/c:c_api_test_cpu PASSED in 27.8s //tensorflow/c:c_test PASSED in 24.6s //tensorflow/c:env_test_cpu PASSED in 18.7s //tensorflow/c:kernels_test_cpu PASSED in 43.6s //tensorflow/c:ops_test PASSED in 18.9s //tensorflow/c:tf_status_helper_test PASSED in 0.1s //tensorflow/c:while_loop_test PASSED in 22.5s //tensorflow/c/eager:c_api_cluster_test_cpu PASSED in 23.1s //tensorflow/c/eager:c_api_remote_function_test_cpu PASSED in 23.5s //tensorflow/c/eager:c_api_remote_test_cpu PASSED in 23.8s //tensorflow/c/eager:c_api_test_cpu PASSED in 25.9s //tensorflow/c/eager:custom_device_test PASSED in 23.0s //tensorflow/c/eager:dlpack_test_cpu PASSED in 27.1s //tensorflow/c/eager/parallel_device:parallel_device_lib_test PASSED in 22.7s 
//tensorflow/c/eager/parallel_device:parallel_device_remote_test PASSED in 23.1s //tensorflow/c/eager/parallel_device:parallel_device_test PASSED in 23.3s //tensorflow/c/experimental/filesystem/plugins/gcs:expiring_lru_cache_test PASSED in 0.1s //tensorflow/c/experimental/filesystem/plugins/gcs:ram_file_block_cache_test PASSED in 2.2s //tensorflow/c/experimental/grappler:grappler_test PASSED in 20.8s //tensorflow/c/experimental/next_pluggable_device:tensor_pjrt_buffer_util_test PASSED in 6.5s //tensorflow/c/experimental/ops/gen/common:case_format_test PASSED in 0.4s //tensorflow/c/experimental/ops/gen/cpp:cpp_generator_test PASSED in 0.4s //tensorflow/c/experimental/ops/gen/cpp/renderers:renderer_test PASSED in 0.4s //tensorflow/c/experimental/saved_model/core:constant_loading_test PASSED in 10.1s //tensorflow/c/experimental/saved_model/core:object_graph_traversal_test PASSED in 9.5s //tensorflow/c/experimental/saved_model/core:saved_variable_loading_test PASSED in 13.3s //tensorflow/c/experimental/saved_model/core:signature_flattening_test PASSED in 9.4s //tensorflow/c/experimental/saved_model/core:tf_concrete_function_loading_test PASSED in 9.1s //tensorflow/c/experimental/saved_model/core/ops:restore_ops_test PASSED in 11.4s //tensorflow/c/experimental/saved_model/core/ops:variable_ops_test PASSED in 12.2s //tensorflow/c/experimental/saved_model/internal:saved_model_api_test PASSED in 23.3s //tensorflow/c/experimental/stream_executor:stream_executor_test PASSED in 0.1s //tensorflow/c/kernels:bitcast_op_test PASSED in 0.4s //tensorflow/c/kernels:summary_op_benchmark_test PASSED in 0.4s //tensorflow/c/kernels:summary_op_test PASSED in 0.4s //tensorflow/c/kernels:tensor_shape_utils_test PASSED in 0.1s //tensorflow/cc:cc_op_gen_test PASSED in 0.4s //tensorflow/cc:client_client_session_test PASSED in 1.9s //tensorflow/cc:coordinator_test PASSED in 3.8s //tensorflow/cc:framework_cc_ops_test PASSED in 2.1s //tensorflow/cc:framework_gradient_checker_test PASSED in 2.3s //tensorflow/cc:framework_gradients_test PASSED in 4.1s //tensorflow/cc:framework_scope_test PASSED in 0.4s //tensorflow/cc:framework_while_gradients_test PASSED in 2.4s //tensorflow/cc:gradients_array_grad_test PASSED in 4.6s //tensorflow/cc:gradients_data_flow_grad_test PASSED in 2.0s //tensorflow/cc:gradients_functional_grad_test PASSED in 2.0s //tensorflow/cc:gradients_image_grad_test PASSED in 5.4s //tensorflow/cc:gradients_linalg_grad_test PASSED in 2.2s //tensorflow/cc:gradients_manip_grad_test PASSED in 1.9s //tensorflow/cc:gradients_math_grad_test PASSED in 4.5s //tensorflow/cc:gradients_nn_grad_test PASSED in 3.3s //tensorflow/cc:gradients_resource_variable_grad_test PASSED in 2.1s //tensorflow/cc:ops_const_op_test PASSED in 0.4s //tensorflow/cc:ops_while_loop_test PASSED in 1.9s //tensorflow/cc:queue_runner_test PASSED in 12.0s //tensorflow/cc/experimental/base/tests:tensor_test PASSED in 0.1s //tensorflow/cc/experimental/base/tests:tensorhandle_test PASSED in 27.5s //tensorflow/cc/experimental/libexport:load_test PASSED in 0.1s //tensorflow/cc/experimental/libexport:save_test PASSED in 0.1s //tensorflow/cc/experimental/libtf:libtf_module_test PASSED in 23.0s //tensorflow/cc/experimental/libtf:libtf_object_test PASSED in 0.1s //tensorflow/cc/experimental/libtf:libtf_perf_test PASSED in 0.1s //tensorflow/cc/experimental/libtf:libtf_runtime_test PASSED in 24.7s //tensorflow/cc/experimental/libtf:libtf_transform_test PASSED in 24.2s //tensorflow/cc/experimental/libtf:libtf_value_test PASSED in 0.1s 
//tensorflow/cc/experimental/libtf:libtf_visit_test PASSED in 0.1s //tensorflow/cc/experimental/libtf/impl:iostream_test PASSED in 0.1s //tensorflow/cc/experimental/libtf/impl:none_test PASSED in 0.1s //tensorflow/cc/experimental/libtf/impl:scalars_test PASSED in 0.1s //tensorflow/cc/experimental/libtf/impl:string_test PASSED in 0.1s //tensorflow/cc/experimental/libtf/impl:tensor_spec_test PASSED in 0.1s //tensorflow/cc/saved_model:bundle_v2_test PASSED in 0.1s //tensorflow/cc/saved_model:fingerprinting_chunked_test PASSED in 0.1s //tensorflow/cc/saved_model:fingerprinting_test PASSED in 0.8s //tensorflow/cc/saved_model:fingerprinting_utils_test PASSED in 0.2s //tensorflow/cc/saved_model:metrics_test PASSED in 0.1s //tensorflow/cc/saved_model:reader_test PASSED in 0.1s //tensorflow/cc/saved_model:saved_model_bundle_lite_test PASSED in 4.9s //tensorflow/cc/saved_model:saved_model_bundle_test PASSED in 5.1s //tensorflow/cc/saved_model:util_test PASSED in 0.1s //tensorflow/cc/saved_model/experimental/tests:saved_model_api_test PASSED in 29.0s //tensorflow/cc/tools:freeze_saved_model_test PASSED in 2.0s //tensorflow/compiler/aot:codegen_test PASSED in 23.0s //tensorflow/compiler/jit:compilability_check_util_test PASSED in 18.2s //tensorflow/compiler/jit:deadness_analysis_test PASSED in 6.9s //tensorflow/compiler/jit:device_compilation_cache_test PASSED in 3.8s //tensorflow/compiler/jit:device_compilation_cluster_signature_test PASSED in 3.8s //tensorflow/compiler/jit:device_compilation_profiler_test PASSED in 19.5s //tensorflow/compiler/jit:device_compiler_client_test PASSED in 3.9s //tensorflow/compiler/jit:device_compiler_disable_test PASSED in 14.5s //tensorflow/compiler/jit:device_executable_persistor_test PASSED in 18.9s //tensorflow/compiler/jit:device_util_test PASSED in 3.8s //tensorflow/compiler/jit:encapsulate_util_test PASSED in 0.6s //tensorflow/compiler/jit:node_matchers_test PASSED in 0.4s //tensorflow/compiler/jit:resource_operation_safety_analysis_test PASSED in 6.7s //tensorflow/compiler/jit:shape_inference_test PASSED in 0.4s //tensorflow/compiler/jit:xla_activity_listener_test PASSED in 18.3s //tensorflow/compiler/jit:xla_cluster_util_test PASSED in 6.8s //tensorflow/compiler/jit:xla_compile_util_test PASSED in 4.0s //tensorflow/compiler/jit:xla_kernel_creator_test PASSED in 6.6s //tensorflow/compiler/jit:xla_launch_util_test PASSED in 18.8s //tensorflow/compiler/jit/tests:auto_clustering_test PASSED in 19.5s //tensorflow/compiler/mlir:mlir_graph_optimization_pass_test PASSED in 16.8s //tensorflow/compiler/mlir:register_common_dialects_test PASSED in 13.9s //tensorflow/compiler/mlir/lite:lstm_utils_test PASSED in 0.6s //tensorflow/compiler/mlir/lite:offset_buffer_test PASSED in 0.1s //tensorflow/compiler/mlir/lite:perception_ops_utils_test PASSED in 0.5s //tensorflow/compiler/mlir/lite:size_utils_test PASSED in 0.1s //tensorflow/compiler/mlir/lite:tftext_utils_test PASSED in 0.4s //tensorflow/compiler/mlir/lite/debug:debug_test PASSED in 0.5s //tensorflow/compiler/mlir/lite/experimental/remat:rematerializer_test PASSED in 0.8s //tensorflow/compiler/mlir/lite/experimental/tac:execution_metadata_exporter_test PASSED in 5.5s //tensorflow/compiler/mlir/lite/experimental/tac/tests:compute-cost.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/experimental/tac/tests:device-transform-gpu.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/experimental/tac/tests:device-transform-nnapi.mlir.test PASSED in 0.6s 
//tensorflow/compiler/mlir/lite/experimental/tac/tests:fold-constants-to-subgraph.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/experimental/tac/tests:get-alternative-subgraph.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/experimental/tac/tests:get-op-cost.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/experimental/tac/tests:pick-subgraphs.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/experimental/tac/tests:raise-target-subgraphs.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/experimental/tac/tests:tac-filter.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/experimental/tac/tests:target-annotation.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/experimental/tac/tests/e2e:device-transform-nnapi.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/experimental/tac/tests/e2e:simple-graph.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/metrics:error_collector_inst_test PASSED in 0.3s //tensorflow/compiler/mlir/lite/quantization:numerical_utils_test PASSED in 0.1s //tensorflow/compiler/mlir/lite/quantization/lite:quantize_model_test PASSED in 7.2s //tensorflow/compiler/mlir/lite/quantization/stablehlo:quantization_test PASSED in 13.1s //tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:fallback_to_flex_ops_default.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:fallback_to_flex_ops_legacy.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:tf_to_quant.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:tf_to_quant_4bit.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/quantization/tests:import_quant_stats.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/sparsity:sparsify_model_test PASSED in 1.1s //tensorflow/compiler/mlir/lite/stablehlo/tests:call_xla_module_to_stablehlo.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:compose-uniform-quantized-type.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:composite-lowering.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:fold_broadcast.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:fuse_mhlo_convolution.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-inplaceupdate.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-skip-partitioned-calls.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-skip-quantization-ops.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-composite.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-vhlo.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-add.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-broadcast.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-clamp.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-concat.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-constant.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-conv.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-max.mlir.test PASSED in 0.7s 
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-mul.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-pad.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-reshape.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-rsqrt.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-sub.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize_hlo.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/stablehlo/tests:odml-to-stablehlo-allow-tf.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/stablehlo/tests:odml-to-stablehlo-smuggle-resize.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/stablehlo/tests:optimize.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:stablehlo-custom-call-legalize-composite.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-clamp.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-concat.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-conv.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-division.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-logistic.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-multiply.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-resize-bilinear.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-tf-quantize.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tfl_legalize_hlo.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tfl_legalize_hlo_custom_call.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:unfold_splat_constant_pass.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:unfuse_mhlo_batch_norm.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:uniform-quantized-stablehlo-to-tfl.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests:analyze-variables.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests:canonicalize.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests:const-fold.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests:decompose-hybrid-quantization.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests:default_quant_params.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests:dilated-conv.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:fuse-tftext.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:get-arithmetic-count.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests:guarantee_func_has_one_use.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests:inlining.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests:insert_call_once_op.mlir.test PASSED in 0.6s 
//tensorflow/compiler/mlir/lite/tests:legalize-tensorlist.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:legalize-tf-assert.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests:legalize-tf-hashtables.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests:legalize-tf-no-runtime-verification.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests:legalize-tf-variables.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests:legalize-tf-while.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/tests:legalize-tf.mlir.test PASSED in 1.6s //tensorflow/compiler/mlir/lite/tests:legalize_jax_random.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:lift_tflite_flex_ops.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list-default-to-single-batch.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list-enable-dynamic-update-slice.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests:modify_io_nodes.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests:ops.mlir.test PASSED in 4.9s //tensorflow/compiler/mlir/lite/tests:optimize-after-quantization.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:optimize.mlir.test PASSED in 2.8s //tensorflow/compiler/mlir/lite/tests:optimize_batch_matmul.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests:optimize_functional_ops.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests:optimize_no_verify.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:optimize_op_order.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:partitioned-topological-sort.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests:pin-ops-with-side-effects.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:post-quantize-dynamic-range.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests:post-quantize.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests:prepare-composite-functions-tf.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/lite/tests:prepare-quantize-dynamic-range.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/lite/tests:prepare-quantize-post-training-16bits.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/tests:prepare-quantize-post-training.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/lite/tests:prepare-quantize-signed.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests:prepare-quantize.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/tests:prepare-tf-fake-quant-4bit.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/lite/tests:prepare-tf-fake-quant.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/lite/tests:prepare-tf-with-allowing-bf16-and-f16-type-legalization.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests:prepare-tf.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/lite/tests:push-tpose-through-ewise.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/lite/tests:quantize-dynamic-range-float16.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests:quantize-dynamic-range.mlir.test PASSED in 2.3s //tensorflow/compiler/mlir/lite/tests:quantize-numeric-verify.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/lite/tests:quantize-variables.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests:quantize.mlir.test PASSED in 1.3s 
//tensorflow/compiler/mlir/lite/tests:raise-custom-ops.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests:reduce-type-precision.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests:reduce_while_operands.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:shape-inference.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:split-merged-operands.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests:tfl_while_op_licm.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests:tfl_while_outline.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests:trim-functions-tf.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests:unfold-large-splat-constant.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/debuginfo:v1_1.0_224_frozen.wrong_attr.line.part.pbtxt.test PASSED in 1.1s //tensorflow/compiler/mlir/lite/tests/debuginfo:v1_1.0_224_frozen.wrong_attr.stack.part.pbtxt.test PASSED in 1.1s //tensorflow/compiler/mlir/lite/tests/end2end:add.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/end2end:back2back_fake_quant.pbtxt.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/end2end:control_flow_v1.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/end2end:conv_2d.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/end2end:conv_2d_nchw.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/end2end:custom_opdef.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/end2end:disallow_stateful_partitioned_call.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_per_channel.pbtxt.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_per_channel_4bit.pbtxt.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_without_identity.pbtxt.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_without_identity_4bit.pbtxt.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests/end2end:graph-input-node.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/end2end:graph_with_placeholder_with_default.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/end2end:if_op.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/end2end:quant_stats.pbtxt.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/end2end:unroll_batch_matmul.pbtxt.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/end2end:unroll_batch_matmul_disabled.pbtxt.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:basic_lstm.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:bucketize.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:cast_bf16.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:composite_op_round_trip.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:constants.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:constants_offset.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:control_edges.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:custom_op.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:custom_op_offset.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:dynamic_shape.mlir.test PASSED in 0.7s 
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:empty_input_output_names.json.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:external_constant.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:if_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:import_json.json.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:importer_test_min_max.cc.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:importer_test_min_max.cc.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:input_arrays.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:input_output_names_attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:legacy_reshape.json.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:lstm.json.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:lstm.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:many_attribute_op.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:math.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:matmul.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:mix_tflite_vhlo.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:multi_output_op.json.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:optional.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:optional_input.json.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:output_arrays.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:pruning.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:pruning_function_input_as_output.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:quant_stats.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:quantization.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:reshape.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:signature.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:signature_with_multiple_entry_points.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:simple.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:tf_variant_type.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:unranked_function_output.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:unranked_tensor.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:variable.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:vhlo.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:vhlo_const.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:vhlo_custom_call.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:while_op.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/mlir2exec:tfl_while_op.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:basic_lstm.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:bucketize.mlir.test PASSED in 1.0s 
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:cast_bf16.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:custom_op_with_tflite_op.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:custom_tensorlist_reserve.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:deduplicate_const.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:depthwise_conv2d.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:depthwise_conv2d_v2.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_builtin.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_custom.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_flex.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_flex_enable_builtin.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:dynamic_shape_constant.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fake_quant.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_exclusively.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_complex128.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_f64.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_tflite_op.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fully_connected.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fully_connected_v2.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:hashtable_resource.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:if_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:logical.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:low_bit_packing.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm_asym_attr.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm_quantized.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:math.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:metadata.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:mul_v2.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:mul_v3.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:nn.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:numeric_verify.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:optional.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:quantization.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:reshape.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_output_override.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_with_multiple_entry_points.mlir.test PASSED in 0.6s 
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_with_no_inputs.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple_with_connected_control_nodes.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple_with_unconnected_control_nodes.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:svdf.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:svdf_v2.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:tf_entry_function.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:tfl_while_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:transpose_conv_optional.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:type_attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:u16_quant.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unidirectional_sequence_lstm.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unidirectional_sequence_rnn.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unranked_tensor.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unsorted_segment_prod.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:variable.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:variant_type_on_func.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:variant_type_on_op.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:while_op.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/quantization/common:attrs_and_constraints_test PASSED in 6.1s //tensorflow/compiler/mlir/quantization/common:func_test PASSED in 5.9s //tensorflow/compiler/mlir/quantization/common:lift_as_function_call_test PASSED in 6.1s //tensorflow/compiler/mlir/quantization/common:uniform_quantized_types_test PASSED in 6.1s //tensorflow/compiler/mlir/quantization/common/python:testing_test PASSED in 37.8s //tensorflow/compiler/mlir/quantization/common/quantization_lib:quantization_driver_test PASSED in 6.0s //tensorflow/compiler/mlir/quantization/stablehlo:bfloat16_type_test PASSED in 18.4s //tensorflow/compiler/mlir/quantization/stablehlo:convert_tf_quant_to_mhlo_int_test PASSED in 12.7s //tensorflow/compiler/mlir/quantization/stablehlo:convert_tf_quant_types_test PASSED in 13.6s //tensorflow/compiler/mlir/quantization/stablehlo:math_utils_test PASSED in 0.1s //tensorflow/compiler/mlir/quantization/stablehlo:stablehlo_type_utils_test PASSED in 0.3s //tensorflow/compiler/mlir/quantization/stablehlo:tf_type_utils_test PASSED in 17.3s //tensorflow/compiler/mlir/quantization/stablehlo/cc:config_test PASSED in 0.1s //tensorflow/compiler/mlir/quantization/stablehlo/cc:graph_def_test PASSED in 0.1s //tensorflow/compiler/mlir/quantization/stablehlo/cc:io_test PASSED in 0.1s //tensorflow/compiler/mlir/quantization/stablehlo/cc:permutation_test PASSED in 0.1s //tensorflow/compiler/mlir/quantization/stablehlo/cc:pre_calibration_test PASSED in 11.4s //tensorflow/compiler/mlir/quantization/stablehlo/cc:report_test PASSED in 5.9s //tensorflow/compiler/mlir/quantization/stablehlo/cc:saved_model_export_test PASSED in 14.2s 
//tensorflow/compiler/mlir/quantization/stablehlo/cc:saved_model_import_test PASSED in 12.9s //tensorflow/compiler/mlir/quantization/stablehlo/cc/calibration:calibration_parameters_test PASSED in 0.1s //tensorflow/compiler/mlir/quantization/stablehlo/cc/calibration:representative_dataset_test PASSED in 0.1s //tensorflow/compiler/mlir/quantization/stablehlo/ops:stablehlo_op_quant_spec_test PASSED in 6.1s //tensorflow/compiler/mlir/quantization/stablehlo/tests:fill_quantization_options_test PASSED in 2.2s //tensorflow/compiler/mlir/quantization/tensorflow/calibrator:calibration_algorithm_test PASSED in 37.3s //tensorflow/compiler/mlir/quantization/tensorflow/calibrator:calibration_statistics_collector_test PASSED in 0.1s //tensorflow/compiler/mlir/quantization/tensorflow/calibrator:calibration_statistics_saver_op_test PASSED in 0.4s //tensorflow/compiler/mlir/quantization/tensorflow/calibrator:custom_aggregator_op_test PASSED in 100.1s //tensorflow/compiler/mlir/quantization/tensorflow/cc:const_op_size_test PASSED in 0.3s //tensorflow/compiler/mlir/quantization/tensorflow/cc:constant_fold_test PASSED in 8.5s //tensorflow/compiler/mlir/quantization/tensorflow/cc:convert_asset_args_test PASSED in 4.7s //tensorflow/compiler/mlir/quantization/tensorflow/cc:save_variables_test PASSED in 0.3s //tensorflow/compiler/mlir/quantization/tensorflow/debugging:mlir_dump_test PASSED in 0.2s //tensorflow/compiler/mlir/quantization/tensorflow/ops:tf_op_quant_spec_test PASSED in 0.4s //tensorflow/compiler/mlir/quantization/tensorflow/ops:tf_quantize_op_test PASSED in 0.5s //tensorflow/compiler/mlir/quantization/tensorflow/python:concurrency_test PASSED in 116.2s //tensorflow/compiler/mlir/quantization/tensorflow/python:py_function_lib_py_test PASSED in 23.5s //tensorflow/compiler/mlir/quantization/tensorflow/python:pywrap_quantize_model_test PASSED in 64.3s //tensorflow/compiler/mlir/quantization/tensorflow/python:representative_dataset_test PASSED in 34.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:add_dump_tensor_op.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/quantization/tensorflow/tests:add_dump_tensor_op_stablehlo.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/quantization/tensorflow/tests:add_quantization_unit_loc.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:cast_bf16_ops_to_f32.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_custom_aggregation_op_to_quant_stats.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_fake_quant_to_qdq.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_tf_xla_op_to_tf_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_tpu_model_to_cpu.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:duplicate_shape_determining_constants.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:fake_quant_e2e_flow.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:fake_quant_e2e_xla.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_custom_aggregation_ops.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_main_function.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions.mlir.test PASSED in 0.8s 
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions_drq.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions_weight_only.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_restore_op.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_save_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:issue_ids_of_custom_aggregation_ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_hashtable_ops_as_args.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_drq.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_drq_min_elements.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_xla.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_xla_selective_quantization.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:mark_functions_noinline.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:merge_duplicate_resource_ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:merge_initializer_function_ops_to_main.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/quantization/tensorflow/tests:merge_save_function_ops_to_main.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:optimize.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_lifting.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_drq.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_drq_per_channel.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_ptq.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_ptq_per_channel.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:preprocess_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:preprocess_op_weight_only.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:propagate_quantize_type.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composit_functions_debugging.mlir.test PASSED in 3.4s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_drq.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_weight_only.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_xla.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_drq.mlir.test PASSED in 
0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_weights.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_xla.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:remove_var_init_by_const.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/quantization/tensorflow/tests:replace_cast_hacks_with_tf_xla_ops.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/quantization/tensorflow/tests:replace_cast_hacks_with_tf_xla_ops_large_constants.mlir.test PASSED in 10.4s //tensorflow/compiler/mlir/quantization/tensorflow/tests:unfreeze_constants.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/quantization/tensorflow/utils:tf_to_uniform_attribute_utils_test PASSED in 0.5s //tensorflow/compiler/mlir/quantization/tensorflow/utils:tf_to_xla_attribute_utils_test PASSED in 27.2s //tensorflow/compiler/mlir/stablehlo:stablehlo_test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow:bridge_logger_test PASSED in 4.7s //tensorflow/compiler/mlir/tensorflow:call_graph_util_test PASSED in 0.3s //tensorflow/compiler/mlir/tensorflow:cluster_util_test PASSED in 0.3s //tensorflow/compiler/mlir/tensorflow:convert_tensor_test PASSED in 0.5s //tensorflow/compiler/mlir/tensorflow:convert_type_test PASSED in 0.1s //tensorflow/compiler/mlir/tensorflow:data_dumper_logger_config_test PASSED in 4.8s //tensorflow/compiler/mlir/tensorflow:device_util_test PASSED in 0.2s //tensorflow/compiler/mlir/tensorflow:dump_graph_test PASSED in 0.3s //tensorflow/compiler/mlir/tensorflow:dump_mlir_util_test PASSED in 10.7s //tensorflow/compiler/mlir/tensorflow:error_util_test PASSED in 0.1s //tensorflow/compiler/mlir/tensorflow:tf_saved_model_test PASSED in 0.3s //tensorflow/compiler/mlir/tensorflow:tpu_rewrite_device_util_test PASSED in 0.3s //tensorflow/compiler/mlir/tensorflow:xla_rewrite_util_test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:add_functions_for_exported_names.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:annotate-parameter-replication.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:batchmatmul_to_einsum.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:breakup-islands.mlir.test PASSED in 1.9s //tensorflow/compiler/mlir/tensorflow/tests:cannonicalize_ops_outside_compilation.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:canonicalize.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:canonicalize_compile_and_replicate_attributes.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/tensorflow/tests:check_control_dependencies.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/tensorflow/tests:cluster_formation.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:cluster_ops_by_policy.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:cluster_outlining.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:cluster_tf_ops_pass.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:colocate_tpu_copy_with_dynamic_shape.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:constant-fold.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:constant_op_device_assignment.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:convert-tf-control-flow-to-scf.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:convert_control_to_data_outputs.mlir.test PASSED in 0.9s 
//tensorflow/compiler/mlir/tensorflow/tests:convert_launch_func_to_tf_call.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:convert_session_initializer_to_function.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:convert_to_legacy_compile_and_replicate_attributes.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:decompose_reduce_dataset.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:decompose_resource_ops.mlir.test PASSED in 4.4s //tensorflow/compiler/mlir/tensorflow/tests:device_assignment.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:device_assignment_by_func_attr.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:device_attribute_to_launch.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:device_canonicalize.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:device_copy.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:drop_while_shape_invariant.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:einsum.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:embedding_pipelining.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:embedding_program_key.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:embedding_sequencing.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:empty-main.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:end-to-end-tpu-reshard-variables.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:executor_canonicalize.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:executor_island_coarsening.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:executor_island_materialize_const.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:extract_head_tail_outside_compilation.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:extract_outside_compilation.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:extract_tpu_copy_with_dynamic_shape_op.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:fold-broadcast.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:freeze_variables.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:func-attr-invalid.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:func-attr.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:functional-control-flow-to-cfg.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:functional-control-flow-to-regions.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:functionalize-if-fail.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:functionalize-if.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:fused_kernel_matcher.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:gpu_fusion.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:graph_pruning.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:graph_pruning_preserve_ops.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:group_by_dialect.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:guarantee-all-funcs-one-use.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:hoist_broadcast_read.mlir.test PASSED in 0.8s 
//tensorflow/compiler/mlir/tensorflow/tests:hoist_loop_invariant.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:hoist_replicate_invariant_resource_writes.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import_invalid.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import_saved_model.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:inlining.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:isolate-placer.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:launch_outlining.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:launch_to_device_attribute.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:launch_to_device_attribute_legacy.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_gpu_cc_60.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_gpu_cc_70.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_to_nchw.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_to_nhwc.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_move_transposes_begin.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_move_transposes_end.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_to_nchw.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_to_nhwc.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg_arg_control_dep.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg_with_control_flow.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:localize_var_handles.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:lower_globals_to_ml_program.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:lower_globals_to_ml_program_invalid.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:lower_quantized.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:lower_tf.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:lower_variable_ops_to_ml_program.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:mark_input_output_aliases.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:mark_ops_for_outside_compilation.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:materialize_passthrough_op.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:merge_control_flow.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:mlprogram.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:move_tpu_compile_to_front.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:name_anonymous_iterators.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:optimize-arg-operand-constraint.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:optimize.mlir.test PASSED in 0.9s 
//tensorflow/compiler/mlir/tensorflow/tests:order_by_dialect.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:parallel_execute_to_islands.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:parallel_execute_to_islands_legacy.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:prepare_tpu_computation_for_tf_export.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:print.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:promote_resources_to_args.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/tensorflow/tests:promote_resources_to_args_functions.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/tensorflow/tests:promote_var_handles_to_args.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:readonly_references_to_resources.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:region-control-flow-to-functional.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:remove_unused_arguments.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/tensorflow/tests:remove_unused_while_results.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/tensorflow/tests:replica_id_to_device_ordinal.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:replicate_invariant_op_hoisting.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:replicate_tensor_list_init_ops.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:replicate_to_island.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:replicate_to_island_legacy.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:resource-alias-analysis-test.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:resource-device-inference.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:resource_analyzer.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:resource_inlining.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:resource_op_lifting.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:rewrite_tpu_embedding_ops.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:roundtrip-tf-executor.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:shape_inference.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:shape_inference_with_shape_specialization.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:side-effect-analysis-test.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:sink_constant.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:split_into_island_per_op.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:stack_ops_decomposition.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:strip_noinline.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:strip_saved_module_metadata.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:strip_tf_attributes.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:tensor_array_ops_decomposition.mlir.test PASSED in 1.4s //tensorflow/compiler/mlir/tensorflow/tests:tensor_list_ops_decomposition.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:tf-executor-to-functional.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:tf-functional-to-executor.mlir.test PASSED in 0.9s 
//tensorflow/compiler/mlir/tensorflow/tests:tf-ops.mlir.test PASSED in 4.2s //tensorflow/compiler/mlir/tensorflow/tests:tf-reduce-identity.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:tf_data_fuse_map_and_batch.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:tf_data_fuse_pmap_and_batch.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/tensorflow/tests:tf_device_index_selector.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/tensorflow/tests:tf_device_ops.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/tensorflow/tests:tf_device_ops_invalid.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_invalid.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_location_roundtrip.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_printer.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_side_effect.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests:tf_optimize.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_asset_sinking.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_deduplicate_bound_input_bindings.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_assets.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_global_tensors.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_global_tensors_mutable_tensors.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_initialize_variables_in_session_init.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_initialize_variables_in_session_init_fail.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_lift_variables.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_lift_variables_invalid_session.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_mark_initialized_variables.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_ops_invalid.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_optimize_global_tensors.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_optimize_global_tensors_interprocedural.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_remove_vars_in_session_initializer.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_side_effect.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tf_trait_folds.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tfrt_ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu-annotate-dynamic-shape-inputs.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu-cluster-cleanup-attributes.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu-dynamic-layout-pass.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tpu-merge-variables-with-execute.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu-multiple-while-body-func.mlir.test 
PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tpu-resource-read-for-write.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu-variable-runtime-reformatting.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_cluster_formation.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:tpu_colocate_composite_resource_ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_colocate_splits.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_device_propagation.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_host_computation_expansion.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_identity_pruning.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_parallel_execute_sink_resource_write.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_partitioned_op_conversion.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_reorder_replicate_and_partitioned_inputs.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_resource_partitioning.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_rewrite.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/tensorflow/tests:tpu_sharding_identification.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tpu_space_to_depth_pass.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_tail_with_tobool_op.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:tpu_update_embedding_enqueue_op_inputs.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:tpu_validate_inputs.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:transpose-op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:unroll-batch-matmul.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests:update_control_dependencies.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:verify_for_export.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:warn_when_using_deprecated_dumps.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:while_licm.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:xla_broadcast.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:xla_call_module_deserialization.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:xla_call_module_round_trip.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:xla_call_module_serialization.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:xla_cluster_formation.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests:xla_inline_device_ops.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests:xla_rewrite.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests:xla_sharding_util_test PASSED in 0.3s //tensorflow/compiler/mlir/tensorflow/tests:xla_validate_iputs.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:add.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:argument-sharding-invalid.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:argument-sharding.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:constant-folding-hook.mlir.test PASSED in 0.8s 
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:constant-folding.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:convert_mhlo_quant_to_int.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph-resource.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph-resource.pbtxt.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph.pbtxt.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:mlir-module-serialized-str-attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:replicate-tensor-list-init-ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:result-sharding.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:serialized-mlir-module-str-attr-invalid.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:serialized-mlir-module-str-attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:shape-inference-after-legalization.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:shape-inference.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:stablehlo_add.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_coarsening:executor_tpuv1_island_coarsening.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_coarsening:while_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_inlining:executor_tpuv1_inline_tpu_island.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_inlining:while_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:case_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:executor_tpuv1_outline_tpu_island.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:while_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:add.pbtxt.test PASSED in 1.0s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-as-fetch.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-control-dep.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-data-type-with-subtype.pbtxt.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-data-type.pbtxt.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-multi-data-type-with-subtype.pbtxt.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-retval-attrs.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:case_op.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:const-values.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:device-arg-retval-attr.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:empty-input-shapes.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:empty-value-attr.pbtxt.test PASSED in 0.6s 
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:feed-as-fetch.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:feed-control-dep.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:force_shared_name_for_resource_ops.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:function-func-attr.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:functional-if-ops.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:functional-while-ops.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function-control-ret.pbtxt.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function-retval-of-arg.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-custom-operation.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-default-attr.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-device-retval.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-empty-tensor-content.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-func-attr.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-call.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-control-ret-diff-island.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-control-ret-same-island.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-defs.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-input-shapes.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-name-bug.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-resource-args.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-gradient-def.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-input-func-arg-name-collision.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-library.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-malformed.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-scalar-input.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-uint8-return.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-undefined-output.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-version-info.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-while-loop.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:invalid-output-index.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:legacy-fed-input-without-inputs.pbtxt.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:merge_node_with_function.pbtxt.test PASSED in 0.6s 
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:mlir_passthrough_op.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:multi-output-feeds.pbtxt.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:multiple-use-next-iteration.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:node-locations.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:output-shapes-attr.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:output-shapes.pbtxt.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:parse_example.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:parse_example_v2.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:partial-device-name.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:prune_unused_nodes.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:quint8-const.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:shape-attrs.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:stateful-attribute.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:string-attr.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:switch_n.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:target.pbtxt.test PASSED in 0.8s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:tensor-list.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:tf-data-pipeline.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:unregistered_kernel.pbtxt.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir/batch_use_same_function:saved_model.pbtxt.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graph:convert_tensor.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:aliasing_arg_attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:case.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:convert_tensor.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:derived_shape_attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:derived_size_attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:device-arg-retval-attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:export_main_to_flib.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:fetch_feed_names.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:func_attr.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:func_list_attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-control-ret.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-order.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-resource-args-handle-info.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-resource-args.mlir.test PASSED in 0.6s 
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:functional-if-ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:functional-while-ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:graph-as-function.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:infer_derived_attribute.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:invalid_input.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:legalized_name.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:missing-main.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:noop.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:optional_symbol_ref.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:output-shapes-attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:parse_example.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:parse_example_v2.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:preserve-entry-func-names.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:ref-type-attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:ref-while-loop.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:shape_list_attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:simple.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:simple_tf_dialect_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:stringescape.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:switchn.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf-gradient-attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf-legacy-call.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_add.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_identity_n.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_tpu_embedding_ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:type_attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:type_list_attr.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:unique_name.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:unique_output_name.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:while-loop.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/tests/tf_to_hlo_pipeline:sccp-post-shape-inference.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tensorflow/transforms:verify_no_outside_compilation_markers_pass_test PASSED in 12.1s //tensorflow/compiler/mlir/tensorflow/transforms/host_runtime:lower_cluster_to_runtime_ops_test PASSED in 10.5s //tensorflow/compiler/mlir/tensorflow/transforms/host_runtime:tpu_metadata_utils_test PASSED in 10.2s //tensorflow/compiler/mlir/tensorflow/translate:tf_mlir_translate_registration_test PASSED in 12.4s //tensorflow/compiler/mlir/tf2xla/api/v1:cluster_tf_test PASSED 
in 21.1s //tensorflow/compiler/mlir/tf2xla/api/v1:compile_mlir_util_test PASSED in 4.0s //tensorflow/compiler/mlir/tf2xla/api/v1:compile_tf_graph_test PASSED in 0.3s //tensorflow/compiler/mlir/tf2xla/api/v1:tf_dialect_to_executor_test PASSED in 14.0s //tensorflow/compiler/mlir/tf2xla/api/v2:cluster_tf_test PASSED in 21.5s //tensorflow/compiler/mlir/tf2xla/api/v2:legalize_tf_test PASSED in 16.4s //tensorflow/compiler/mlir/tf2xla/api/v2:tf_dialect_to_executor_test PASSED in 13.9s //tensorflow/compiler/mlir/tf2xla/internal:clustering_bridge_passes_test PASSED in 5.0s //tensorflow/compiler/mlir/tf2xla/internal:compilation_timer_test PASSED in 0.2s //tensorflow/compiler/mlir/tf2xla/internal:legalize_tf_mlir_test PASSED in 15.3s //tensorflow/compiler/mlir/tf2xla/internal:legalize_tf_to_hlo_test PASSED in 16.9s //tensorflow/compiler/mlir/tf2xla/internal:logging_hooks_test PASSED in 13.4s //tensorflow/compiler/mlir/tf2xla/internal:mlir_bridge_pass_util_test PASSED in 0.6s //tensorflow/compiler/mlir/tf2xla/internal:mlir_pass_instrumentation_test PASSED in 5.4s //tensorflow/compiler/mlir/tf2xla/internal:test_matchers_test PASSED in 4.2s //tensorflow/compiler/mlir/tf2xla/internal/inference:inference_metrics_pass_test PASSED in 12.3s //tensorflow/compiler/mlir/tf2xla/internal/passes:input_metrics_lowering_pass_test PASSED in 12.1s //tensorflow/compiler/mlir/tf2xla/internal/passes:verify_clustering_pass_test PASSED in 11.8s //tensorflow/compiler/mlir/tf2xla/internal/passes:verify_clustering_pass_test.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tf2xla/internal/passes:verify_input_dialect_to_executor_pass_test.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tf2xla/internal/utils:dialect_detection_utils_test PASSED in 0.3s //tensorflow/compiler/mlir/tf2xla/tests:adjust-layout.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tf2xla/tests:hlo_xla_runtime_pipeline.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-BatchMatMulV2.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-binary-elementwise.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-collective.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-communication.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-include-tf2xla-fallback.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-prefer-tf2xla.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-quant.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-with-tf2xla-hlo-importer.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tf2xla/tests:legalize-tf.mlir.test PASSED in 7.2s //tensorflow/compiler/mlir/tf2xla/tests:tfxla_device_specific_transformations_cpu.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tf2xla/tests:tfxla_device_specific_transformations_gpu.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tf2xla/tests:verify-tfxla-legalization-no-chlo.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tf2xla/tests:verify-tfxla-legalization.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tf2xla/transforms:legalization_op_config_test PASSED in 21.7s //tensorflow/compiler/mlir/tf2xla/transforms:tf2xla_rewriter_test PASSED in 11.8s //tensorflow/compiler/mlir/tf2xla/transforms:verify_tfxla_legalization_test PASSED in 11.9s //tensorflow/compiler/mlir/tf2xla/transforms:xla_legalize_targets_test PASSED in 0.4s 
//tensorflow/compiler/mlir/tf2xla/transforms:xla_legalize_tf_test PASSED in 2.6s //tensorflow/compiler/mlir/tfr:graph_decompose_test PASSED in 35.9s //tensorflow/compiler/mlir/tfr:node_expansion_test PASSED in 58.3s //tensorflow/compiler/mlir/tfr:op_reg_gen_test PASSED in 34.6s //tensorflow/compiler/mlir/tfr:tfr_decompose_ctx_test PASSED in 4.9s //tensorflow/compiler/mlir/tfr:tfr_gen_test PASSED in 25.8s //tensorflow/compiler/mlir/tfr/examples/customization:test_ops_test PASSED in 25.5s //tensorflow/compiler/mlir/tfr/examples/mnist:mnist_ops_test PASSED in 25.0s //tensorflow/compiler/mlir/tfr/examples/pad:pad_ops_test PASSED in 25.0s //tensorflow/compiler/mlir/tfrt/tests:batch_function_fallback_resource_variable_as_captured_tensor.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:batch_function_lowering.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:convert_ref_variables.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:cross_device_transfer.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:deduplicate_if_results.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:fuse_tpu_compile_and_execute_ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:hoist_invariant_ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:hoist_invariant_ops_mlrt.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:lower_bound_batch_threads.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:optimize.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:remove_device_attribute.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:runtime_lowering_gpu.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:runtime_lowering_tpu.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:sink_in_invariant_ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:xla_launch_fallback.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:xla_launch_lowering.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests:xla_rewrite.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/analysis:cost_analysis.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/analysis:tensor_array_side_effect_analysis.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/analysis:update_op_cost_in_tfrt_mlir_test PASSED in 0.5s //tensorflow/compiler/mlir/tfrt/tests/ifrt:lower_to_ifrt_restore_variable.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/ifrt:rewrite_cluster_to_ifrt_call.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/ifrt:sink_variable_as_named_array.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/ifrt:tf_identity_propagation.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/ifrt:tf_restore_merging.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/ifrt:tf_restore_pruning.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/ifrt:tf_restore_splitting.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/ir:fallback_opt.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/ir:tfrt_fallback_util_test PASSED in 0.3s //tensorflow/compiler/mlir/tfrt/tests/mlrt:assign_op_key.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/mlrt:async_while.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/mlrt:fuse_mlrt_ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/mlrt:inline.mlir.test PASSED in 0.6s 
//tensorflow/compiler/mlir/tfrt/tests/mlrt:parallelization.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/mlrt:rewrite_ifrt_load_variable.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/mlrt:tf_to_mlrt.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tfrt/tests/mlrt:tpu_conversions.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tfrt/tests/mlrt:while_to_map_fn.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:attributes.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:basic.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:batch_function_deduplicate.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:batch_function_deduplicate_failed.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:const_tensor.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:control_flow.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:decompose_resource_op.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:derived_attrs.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:device_conversion.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:errors.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:fallback.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:fallback_canonicalization.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:fallback_inline.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:func_attributes.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:func_attributes_multiple_callers.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:func_use_fallback_tensor.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:insert_fallback_tensor_copy.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:merge_tf_if_ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:optimize_tf_control_flow_side_effect.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:remove_tf_if_const_args.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:reorder_assert.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:side_effects.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:tf_to_corert_pipeline.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:tf_to_corert_pipeline_refvar.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/tests/tf_to_corert:whileop.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tfrt/translate/mlrt:mlir_to_bytecode_test PASSED in 0.1s //tensorflow/compiler/mlir/tools/kernel_gen/tests:buffer_deallocation.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/tools/kernel_gen/tests:buffer_reuse.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/tools/kernel_gen/tests:bufferize.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tools/kernel_gen/tests:copy_cleanup.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/tools/kernel_gen/tests:embed_tf_framework.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/tools/kernel_gen/tests:func_to_jit_invocations.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tools/kernel_gen/tests:invalid.mlir.test PASSED in 0.6s 
//tensorflow/compiler/mlir/tools/kernel_gen/tests:isinf.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tools/kernel_gen/tests:ops.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tools/kernel_gen/tests:parallel_loops_to_sequential.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/tools/kernel_gen/tests:rewrite_tf_framework_assert.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_abi_knowledge.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_framework_legalize_to_llvm.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_kernel_gpu_launch_to_llvm.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:convert-tfl-uint8.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:convert_metadata.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tosa/tests:fuse-bias-tf.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:lower-complex-types.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:multi_add.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:retain_call_once_funcs.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:strip-quant-types.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tosa/tests:strip_metadata.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:tf-tfl-to-tosa-pipeline.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:tf-to-tosa-pipeline.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-dequantize_softmax.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-pipeline-filtered.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-pipeline.mlir.test PASSED in 4.7s //tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-stateful.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tosa/tests:verify_fully_converted.mlir.test PASSED in 0.6s //tensorflow/compiler/tests:adadelta_test_cpu PASSED in 19.1s //tensorflow/compiler/tests:adagrad_da_test_cpu PASSED in 16.8s //tensorflow/compiler/tests:adagrad_test_cpu PASSED in 15.9s //tensorflow/compiler/tests:adam_test_cpu PASSED in 18.8s //tensorflow/compiler/tests:add_n_test_cpu PASSED in 12.4s //tensorflow/compiler/tests:argminmax_test_cpu PASSED in 21.9s //tensorflow/compiler/tests:argminmax_test_cpu_mlir_bridge_test PASSED in 23.1s //tensorflow/compiler/tests:async_comp_test_cpu PASSED in 62.6s //tensorflow/compiler/tests:bincount_op_test_cpu PASSED in 11.5s //tensorflow/compiler/tests:bucketize_op_test_cpu PASSED in 12.5s //tensorflow/compiler/tests:bucketize_op_test_cpu_mlir_bridge_test PASSED in 12.2s //tensorflow/compiler/tests:case_test_cpu PASSED in 13.0s //tensorflow/compiler/tests:cast_ops_test_cpu PASSED in 11.8s //tensorflow/compiler/tests:cast_ops_test_cpu_mlir_bridge_test PASSED in 11.8s //tensorflow/compiler/tests:categorical_op_test_cpu PASSED in 16.9s //tensorflow/compiler/tests:categorical_op_test_cpu_mlir_bridge_test PASSED in 17.2s //tensorflow/compiler/tests:cholesky_op_test_cpu PASSED in 22.3s //tensorflow/compiler/tests:cholesky_op_test_cpu_mlir_bridge_test PASSED in 19.5s //tensorflow/compiler/tests:clustering_test_cpu PASSED in 12.4s //tensorflow/compiler/tests:clustering_test_cpu_mlir_bridge_test PASSED in 13.1s //tensorflow/compiler/tests:concat_ops_test_cpu PASSED in 15.2s //tensorflow/compiler/tests:concat_ops_test_cpu_mlir_bridge_test PASSED in 15.4s //tensorflow/compiler/tests:cond_test_cpu PASSED in 14.2s //tensorflow/compiler/tests:const_arg_test_cpu PASSED in 12.9s 
//tensorflow/compiler/tests:const_test_cpu PASSED in 13.3s //tensorflow/compiler/tests:data_format_ops_test_cpu PASSED in 17.6s //tensorflow/compiler/tests:data_format_ops_test_cpu_mlir_bridge_test PASSED in 18.8s //tensorflow/compiler/tests:dense_layer_test_cpu PASSED in 20.4s //tensorflow/compiler/tests:dynamic_slice_ops_test_cpu PASSED in 24.2s //tensorflow/compiler/tests:dynamic_slice_ops_test_cpu_mlir_bridge_test PASSED in 20.4s //tensorflow/compiler/tests:dynamic_stitch_test_cpu PASSED in 11.6s //tensorflow/compiler/tests:dynamic_stitch_test_cpu_mlir_bridge_test PASSED in 10.7s //tensorflow/compiler/tests:eager_test_cpu PASSED in 21.7s //tensorflow/compiler/tests:einsum_op_test_cpu PASSED in 9.7s //tensorflow/compiler/tests:einsum_op_test_cpu_mlir_bridge_test PASSED in 13.3s //tensorflow/compiler/tests:ensure_shape_op_test_cpu PASSED in 10.5s //tensorflow/compiler/tests:extract_image_patches_op_test_cpu PASSED in 11.3s //tensorflow/compiler/tests:extract_image_patches_op_test_cpu_mlir_bridge_test PASSED in 11.9s //tensorflow/compiler/tests:fake_quant_ops_test_cpu PASSED in 21.4s //tensorflow/compiler/tests:fake_quant_ops_test_cpu_mlir_bridge_test PASSED in 28.1s //tensorflow/compiler/tests:fifo_queue_test_cpu PASSED in 11.7s //tensorflow/compiler/tests:fifo_queue_test_cpu_mlir_bridge_test PASSED in 49.7s //tensorflow/compiler/tests:ftrl_ops_test_cpu PASSED in 15.3s //tensorflow/compiler/tests:ftrl_ops_test_cpu_mlir_bridge_test PASSED in 14.3s //tensorflow/compiler/tests:function_test_cpu PASSED in 12.9s //tensorflow/compiler/tests:function_test_cpu_mlir_bridge_test PASSED in 17.0s //tensorflow/compiler/tests:gather_nd_op_test_cpu PASSED in 15.5s //tensorflow/compiler/tests:gather_nd_op_test_cpu_mlir_bridge_test PASSED in 12.2s //tensorflow/compiler/tests:gather_test_cpu PASSED in 74.4s //tensorflow/compiler/tests:gather_test_cpu_mlir_bridge_test PASSED in 81.1s //tensorflow/compiler/tests:image_ops_jit_compile_test_cpu PASSED in 46.4s //tensorflow/compiler/tests:jit_test_cpu PASSED in 70.8s //tensorflow/compiler/tests:listdiff_op_test_cpu PASSED in 17.9s //tensorflow/compiler/tests:listdiff_op_test_cpu_mlir_bridge_test PASSED in 17.7s //tensorflow/compiler/tests:lrn_ops_test_cpu PASSED in 12.9s //tensorflow/compiler/tests:lrn_ops_test_cpu_mlir_bridge_test PASSED in 9.9s //tensorflow/compiler/tests:lstm_test_cpu PASSED in 32.2s //tensorflow/compiler/tests:manip_ops_test_cpu PASSED in 20.0s //tensorflow/compiler/tests:manip_ops_test_cpu_mlir_bridge_test PASSED in 16.4s //tensorflow/compiler/tests:matrix_inverse_op_test_cpu PASSED in 27.3s //tensorflow/compiler/tests:matrix_inverse_op_test_cpu_mlir_bridge_test PASSED in 26.2s //tensorflow/compiler/tests:matrix_solve_op_test_cpu PASSED in 12.7s //tensorflow/compiler/tests:matrix_solve_op_test_cpu_mlir_bridge_test PASSED in 14.6s //tensorflow/compiler/tests:momentum_test_cpu PASSED in 15.7s //tensorflow/compiler/tests:nary_ops_test_cpu PASSED in 13.7s //tensorflow/compiler/tests:nary_ops_test_cpu_mlir_bridge_test PASSED in 13.7s //tensorflow/compiler/tests:nullary_ops_test_cpu PASSED in 13.9s //tensorflow/compiler/tests:nullary_ops_test_cpu_mlir_bridge_test PASSED in 11.6s //tensorflow/compiler/tests:placeholder_test_cpu PASSED in 41.7s //tensorflow/compiler/tests:placeholder_test_cpu_mlir_bridge_test PASSED in 11.3s //tensorflow/compiler/tests:proximal_adagrad_test_cpu PASSED in 13.4s //tensorflow/compiler/tests:proximal_gradient_descent_test_cpu PASSED in 11.8s //tensorflow/compiler/tests:quantized_ops_test_cpu PASSED in 12.0s 
//tensorflow/compiler/tests:reduce_window_test_cpu PASSED in 11.7s //tensorflow/compiler/tests:reduce_window_test_cpu_mlir_bridge_test PASSED in 11.7s //tensorflow/compiler/tests:repeat_op_test_cpu PASSED in 27.6s //tensorflow/compiler/tests:repeat_op_test_cpu_mlir_bridge_test PASSED in 42.8s //tensorflow/compiler/tests:reshape_op_test_cpu PASSED in 12.0s //tensorflow/compiler/tests:reshape_op_test_cpu_mlir_bridge_test PASSED in 14.2s //tensorflow/compiler/tests:reverse_ops_test_cpu PASSED in 21.3s //tensorflow/compiler/tests:reverse_ops_test_cpu_mlir_bridge_test PASSED in 15.1s //tensorflow/compiler/tests:reverse_sequence_op_test_cpu PASSED in 13.3s //tensorflow/compiler/tests:reverse_sequence_op_test_cpu_mlir_bridge_test PASSED in 14.2s //tensorflow/compiler/tests:rmsprop_test_cpu PASSED in 18.0s //tensorflow/compiler/tests:scatter_nd_op_test_cpu PASSED in 31.0s //tensorflow/compiler/tests:scatter_nd_op_test_cpu_mlir_bridge_test PASSED in 32.8s //tensorflow/compiler/tests:searchsorted_op_test_cpu PASSED in 17.7s //tensorflow/compiler/tests:searchsorted_op_test_cpu_mlir_bridge_test PASSED in 21.0s //tensorflow/compiler/tests:segment_reduction_ops_test_cpu PASSED in 20.0s //tensorflow/compiler/tests:segment_reduction_ops_test_cpu_mlir_bridge_test PASSED in 34.1s //tensorflow/compiler/tests:self_adjoint_eig_op_test_cpu PASSED in 19.8s //tensorflow/compiler/tests:self_adjoint_eig_op_test_cpu_mlir_bridge_test PASSED in 20.5s //tensorflow/compiler/tests:slice_ops_test_cpu PASSED in 25.6s //tensorflow/compiler/tests:slice_ops_test_cpu_mlir_bridge_test PASSED in 45.9s //tensorflow/compiler/tests:sparse_to_dense_op_test_cpu PASSED in 12.6s //tensorflow/compiler/tests:sparse_to_dense_op_test_cpu_mlir_bridge_test PASSED in 12.4s //tensorflow/compiler/tests:stack_ops_test_cpu PASSED in 18.6s //tensorflow/compiler/tests:tensor_float_32_test_cpu PASSED in 14.3s //tensorflow/compiler/tests:tensor_float_32_test_cpu_mlir_bridge_test PASSED in 17.5s //tensorflow/compiler/tests:tensor_list_ops_test_cpu PASSED in 13.9s //tensorflow/compiler/tests:tridiagonal_matmul_ops_test_cpu PASSED in 31.1s //tensorflow/compiler/tests:tridiagonal_matmul_ops_test_cpu_mlir_bridge_test PASSED in 22.4s //tensorflow/compiler/tests:tridiagonal_solve_ops_test_cpu PASSED in 20.4s //tensorflow/compiler/tests:tridiagonal_solve_ops_test_cpu_mlir_bridge_test PASSED in 18.7s //tensorflow/compiler/tests:unique_ops_test_cpu PASSED in 11.3s //tensorflow/compiler/tests:variable_ops_test_cpu PASSED in 37.6s //tensorflow/compiler/tests:variable_ops_test_cpu_mlir_bridge_test PASSED in 23.1s //tensorflow/compiler/tests:where_op_test_cpu PASSED in 12.8s //tensorflow/compiler/tests:while_test_cpu PASSED in 15.3s //tensorflow/compiler/tests:xla_call_module_no_platform_check_test_cpu PASSED in 11.9s //tensorflow/compiler/tests:xla_call_module_no_shape_assertions_check_test_cpu PASSED in 15.9s //tensorflow/compiler/tests:xla_call_module_test_cpu PASSED in 14.9s //tensorflow/compiler/tests:xla_custom_call_ops_test_cpu PASSED in 11.7s //tensorflow/compiler/tests:xla_device_gpu_test_cpu PASSED in 12.9s //tensorflow/compiler/tests:xla_device_test_cpu PASSED in 14.8s //tensorflow/compiler/tests:xla_device_test_cpu_mlir_bridge_test PASSED in 26.8s //tensorflow/compiler/tests:xla_dump_to_test_cpu PASSED in 10.8s //tensorflow/compiler/tests:xla_dump_to_test_cpu_mlir_bridge_test PASSED in 10.8s //tensorflow/compiler/tests:xla_ops_test_cpu PASSED in 56.5s //tensorflow/compiler/tests:xla_ops_test_cpu_mlir_bridge_test PASSED in 51.5s 
//tensorflow/compiler/tests:xla_test_test PASSED in 11.1s //tensorflow/compiler/tf2xla:const_analysis_test PASSED in 4.1s //tensorflow/compiler/tf2xla:cpu_function_runtime_test PASSED in 0.1s //tensorflow/compiler/tf2xla:functionalize_cond_test PASSED in 0.7s //tensorflow/compiler/tf2xla:functionalize_control_flow_test PASSED in 0.7s //tensorflow/compiler/tf2xla:fused_batchnorm_reserve_space_test_cpu PASSED in 18.7s //tensorflow/compiler/tf2xla:graph_compiler_test PASSED in 4.0s //tensorflow/compiler/tf2xla:literal_util_test PASSED in 0.4s //tensorflow/compiler/tf2xla:resource_operation_table_test PASSED in 4.1s //tensorflow/compiler/tf2xla:resource_util_test_cpu PASSED in 1.8s //tensorflow/compiler/tf2xla:sharding_util_test PASSED in 0.6s //tensorflow/compiler/tf2xla:tf2xla_opset_test PASSED in 6.7s //tensorflow/compiler/tf2xla:tf2xla_test PASSED in 12.4s //tensorflow/compiler/tf2xla:tf2xla_util_test PASSED in 0.7s //tensorflow/compiler/tf2xla:type_util_test PASSED in 0.3s //tensorflow/compiler/tf2xla:xla_compiler_test PASSED in 13.2s //tensorflow/compiler/tf2xla:xla_jit_compiled_cpu_function_test PASSED in 12.4s //tensorflow/compiler/tf2xla:xla_op_registry_test PASSED in 3.9s //tensorflow/compiler/tf2xla/kernels:rng_converter_utils_test PASSED in 1.1s //tensorflow/core:@local_tsl__tsl_lib_core_legacy_lib_core_all_tests PASSED in 0.3s //tensorflow/core:__tensorflow_core_lib_core_legacy_lib_core_all_tests PASSED in 6.0s //tensorflow/core:__tensorflow_core_lib_gtl_legacy_lib_gtl_tests PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_monitoring_cell_reader_test PASSED in 30.1s //tensorflow/core:__tensorflow_core_lib_monitoring_collection_registry_test PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_monitoring_counter_test PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_monitoring_gauge_test PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_monitoring_metric_def_test PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_monitoring_percentile_sampler_test PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_monitoring_sampler_test PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_monitoring_test_utils_test PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_strings_legacy_low_level_library_tests PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_wav_wav_io_test PASSED in 0.1s //tensorflow/core:__tensorflow_core_util_mkl_util_test_srcs PASSED in 0.1s //tensorflow/core:lib_strings_ordered_code_test PASSED in 1.2s //tensorflow/core:lib_strings_proto_serialization_test PASSED in 0.1s //tensorflow/core/api_def:api_test PASSED in 2.1s //tensorflow/core/api_def:update_api_def_test PASSED in 0.1s //tensorflow/core/common_runtime:all_to_all_test_cpu PASSED in 0.4s //tensorflow/core/common_runtime:arg_ret_placement_test PASSED in 0.3s //tensorflow/core/common_runtime:buf_rendezvous_test PASSED in 0.6s //tensorflow/core/common_runtime:collective_executor_mgr_test PASSED in 0.6s //tensorflow/core/common_runtime:collective_param_resolver_local_test PASSED in 4.9s //tensorflow/core/common_runtime:collective_rma_local_test PASSED in 0.7s //tensorflow/core/common_runtime:colocate_predecessor_trees_pass_test PASSED in 0.7s //tensorflow/core/common_runtime:composite_device_test PASSED in 0.3s //tensorflow/core/common_runtime:cost_measurement_registry_test PASSED in 1.9s //tensorflow/core/common_runtime:cost_util_test PASSED in 0.1s //tensorflow/core/common_runtime:device_mgr_test PASSED in 0.6s //tensorflow/core/common_runtime:device_propagation_test PASSED in 0.3s 
//tensorflow/core/common_runtime:device_resolver_local_test PASSED in 0.6s //tensorflow/core/common_runtime:device_set_test PASSED in 0.7s //tensorflow/core/common_runtime:direct_session_test_cpu PASSED in 1.5s //tensorflow/core/common_runtime:direct_session_with_debug_test PASSED in 1.9s //tensorflow/core/common_runtime:direct_session_with_tracking_alloc_test PASSED in 0.9s //tensorflow/core/common_runtime:dynamic_device_mgr_test PASSED in 0.7s //tensorflow/core/common_runtime:eval_const_tensor_test PASSED in 0.4s //tensorflow/core/common_runtime:executor_test PASSED in 1.4s //tensorflow/core/common_runtime:function_optimization_registration_test PASSED in 0.6s //tensorflow/core/common_runtime:function_optimization_registry_no_pass_test PASSED in 0.7s //tensorflow/core/common_runtime:function_optimization_registry_pass_failure_test PASSED in 0.7s //tensorflow/core/common_runtime:function_optimization_registry_test PASSED in 0.6s //tensorflow/core/common_runtime:function_threadpool_test PASSED in 0.8s //tensorflow/core/common_runtime:graph_constructor_test PASSED in 2.0s //tensorflow/core/common_runtime:graph_runner_test PASSED in 0.6s //tensorflow/core/common_runtime:hierarchical_tree_broadcaster_test_cpu PASSED in 2.7s //tensorflow/core/common_runtime:inline_function_utils_test PASSED in 0.4s //tensorflow/core/common_runtime:input_colocation_exemption_registry_test PASSED in 0.3s //tensorflow/core/common_runtime:int32_fulltype_test PASSED in 0.4s //tensorflow/core/common_runtime:isolate_placer_inspection_required_ops_pass_test PASSED in 0.7s //tensorflow/core/common_runtime:lower_case_op_test PASSED in 2.0s //tensorflow/core/common_runtime:lower_function_call_test PASSED in 1.9s //tensorflow/core/common_runtime:lower_functional_ops_test PASSED in 2.1s //tensorflow/core/common_runtime:lower_if_op_test PASSED in 2.1s //tensorflow/core/common_runtime:lower_while_op_test PASSED in 2.3s //tensorflow/core/common_runtime:mkl_cpu_allocator_test PASSED in 0.1s //tensorflow/core/common_runtime:mkl_threadpool_device_test PASSED in 0.1s //tensorflow/core/common_runtime:no_op_cost_measurement_test PASSED in 0.1s //tensorflow/core/common_runtime:null_request_cost_accessor_test PASSED in 0.1s //tensorflow/core/common_runtime:optimization_registry_test PASSED in 0.7s //tensorflow/core/common_runtime:optimize_cross_host_control_deps_test PASSED in 5.5s //tensorflow/core/common_runtime:optimize_function_graph_utils_test PASSED in 0.4s //tensorflow/core/common_runtime:partitioning_utils_test PASSED in 0.4s //tensorflow/core/common_runtime:pending_counts_test PASSED in 0.6s //tensorflow/core/common_runtime:permuter_test_cpu PASSED in 3.0s //tensorflow/core/common_runtime:placer_inspection_required_ops_utils_test PASSED in 0.7s //tensorflow/core/common_runtime:placer_test PASSED in 0.7s //tensorflow/core/common_runtime:process_function_library_runtime_test_cpu PASSED in 0.4s //tensorflow/core/common_runtime:process_util_test PASSED in 0.1s //tensorflow/core/common_runtime:quantize_training_test PASSED in 2.2s //tensorflow/core/common_runtime:rendezvous_util_test PASSED in 0.1s //tensorflow/core/common_runtime:replicate_constants_pass_test PASSED in 0.7s //tensorflow/core/common_runtime:replicate_per_replica_nodes_test PASSED in 0.4s //tensorflow/core/common_runtime:request_cost_accessor_registry_test PASSED in 1.7s //tensorflow/core/common_runtime:request_cost_test PASSED in 0.1s //tensorflow/core/common_runtime:ring_gatherer_test_cpu PASSED in 2.3s //tensorflow/core/common_runtime:ring_reducer_test_cpu 
PASSED in 4.7s //tensorflow/core/common_runtime:scoped_allocator_mgr_test PASSED in 4.5s //tensorflow/core/common_runtime:session_test PASSED in 0.6s //tensorflow/core/common_runtime:shape_refiner_test PASSED in 0.6s //tensorflow/core/common_runtime:single_threaded_executor_test PASSED in 0.7s //tensorflow/core/common_runtime:threadpool_device_test PASSED in 0.7s //tensorflow/core/common_runtime:type_inference_test PASSED in 1.9s //tensorflow/core/common_runtime/eager:attr_builder_test PASSED in 22.2s //tensorflow/core/common_runtime/eager:context_test PASSED in 10.8s //tensorflow/core/common_runtime/eager:custom_device_test PASSED in 8.3s //tensorflow/core/common_runtime/eager:eager_executor_test PASSED in 8.7s //tensorflow/core/common_runtime/eager:eager_op_rewrite_registry_test PASSED in 0.6s //tensorflow/core/common_runtime/eager:eager_operation_test PASSED in 8.0s //tensorflow/core/common_runtime/eager:execute_node_test PASSED in 9.3s //tensorflow/core/common_runtime/eager:execute_test PASSED in 21.8s //tensorflow/core/common_runtime/eager:kernel_and_device_test PASSED in 1.1s //tensorflow/core/common_runtime/eager:mkl_eager_op_rewrite_test PASSED in 12.8s //tensorflow/core/common_runtime/eager:placement_test PASSED in 8.9s //tensorflow/core/common_runtime/eager:placement_utils_test PASSED in 9.5s //tensorflow/core/common_runtime/eager:summary_optimizer_test PASSED in 0.1s //tensorflow/core/common_runtime/eager:tensor_handle_data_test PASSED in 8.4s //tensorflow/core/common_runtime/eager:tensor_handle_test PASSED in 8.5s //tensorflow/core/common_runtime/gpu:gpu_device_on_non_gpu_machine_test PASSED in 0.1s //tensorflow/core/common_runtime/gpu:gpu_serving_device_selector_test PASSED in 0.1s //tensorflow/core/common_runtime/next_pluggable_device:c_plugin_coordination_service_agent_test PASSED in 3.3s //tensorflow/core/common_runtime/next_pluggable_device/c:plugin_c_api_test PASSED in 23.6s //tensorflow/core/common_runtime/next_pluggable_device/c:tf_rendezvous_c_api_test PASSED in 0.1s //tensorflow/core/config:flags_py_test PASSED in 8.1s //tensorflow/core/config:flags_test PASSED in 0.1s //tensorflow/core/data:compression_utils_test PASSED in 1.4s //tensorflow/core/data:dataset_utils_test PASSED in 0.4s //tensorflow/core/data:hash_utils_test PASSED in 0.6s //tensorflow/core/data:metric_utils_test PASSED in 5.6s //tensorflow/core/data:name_utils_test PASSED in 0.1s //tensorflow/core/data:rewrite_utils_test PASSED in 0.4s //tensorflow/core/data:serialization_utils_test PASSED in 0.4s //tensorflow/core/data:snapshot_utils_test PASSED in 0.4s //tensorflow/core/data:split_utils_test PASSED in 0.3s //tensorflow/core/data:standalone_save_restore_test PASSED in 1.4s //tensorflow/core/data:standalone_test PASSED in 4.3s //tensorflow/core/data:tfdataz_metrics_test PASSED in 1.8s //tensorflow/core/data:unbounded_thread_pool_test PASSED in 0.3s //tensorflow/core/data:utils_test PASSED in 0.1s //tensorflow/core/data/service:auto_scaler_test PASSED in 0.1s //tensorflow/core/data/service:byte_size_test PASSED in 0.1s //tensorflow/core/data/service:common_test PASSED in 0.1s //tensorflow/core/data/service:credentials_factory_test PASSED in 0.4s //tensorflow/core/data/service:cross_trainer_cache_test PASSED in 1.3s //tensorflow/core/data/service:data_service_test PASSED in 9.1s //tensorflow/core/data/service:data_transfer_test PASSED in 0.4s //tensorflow/core/data/service:dataset_store_test PASSED in 0.5s //tensorflow/core/data/service:dispatcher_client_test PASSED in 2.5s 
//tensorflow/core/data/service:dispatcher_state_test PASSED in 0.4s //tensorflow/core/data/service:graph_rewriters_test PASSED in 0.5s //tensorflow/core/data/service:grpc_dispatcher_impl_test PASSED in 1.9s //tensorflow/core/data/service:grpc_util_test PASSED in 0.5s //tensorflow/core/data/service:grpc_worker_impl_test PASSED in 1.9s //tensorflow/core/data/service:journal_test PASSED in 0.4s //tensorflow/core/data/service:split_provider_test PASSED in 1.8s //tensorflow/core/data/service:task_runner_test PASSED in 2.6s //tensorflow/core/data/service:test_util_test PASSED in 1.6s //tensorflow/core/data/service:url_test PASSED in 0.1s //tensorflow/core/data/service:utils_test PASSED in 0.4s //tensorflow/core/data/service:validate_utils_test PASSED in 0.1s //tensorflow/core/data/service:worker_client_test PASSED in 2.1s //tensorflow/core/data/service:worker_impl_test PASSED in 2.0s //tensorflow/core/data/service/client:data_service_client_test PASSED in 2.4s //tensorflow/core/data/service/client:utils_test PASSED in 2.0s //tensorflow/core/data/service/client:validate_utils_test PASSED in 1.5s //tensorflow/core/data/service/snapshot:distributed_snapshot_test PASSED in 16.5s //tensorflow/core/data/service/snapshot:file_utils_test PASSED in 0.4s //tensorflow/core/data/service/snapshot:parallel_tfrecord_writer_test PASSED in 2.9s //tensorflow/core/data/service/snapshot:path_utils_test PASSED in 0.1s //tensorflow/core/data/service/snapshot:prefetched_split_provider_test PASSED in 13.6s //tensorflow/core/data/service/snapshot:snapshot_chunk_provider_test PASSED in 0.4s //tensorflow/core/data/service/snapshot:snapshot_manager_test PASSED in 1.8s //tensorflow/core/data/service/snapshot:snapshot_split_provider_test PASSED in 0.5s //tensorflow/core/data/service/snapshot:snapshot_stream_writer_checkpoint_test PASSED in 2.2s //tensorflow/core/data/service/snapshot:snapshot_stream_writer_test PASSED in 1.8s //tensorflow/core/data/service/snapshot:utils_test PASSED in 0.1s //tensorflow/core/debug:debug_graph_utils_test PASSED in 0.3s //tensorflow/core/distributed_runtime:call_options_test PASSED in 0.1s //tensorflow/core/distributed_runtime:cluster_function_library_runtime_test PASSED in 3.6s //tensorflow/core/distributed_runtime:collective_param_resolver_distributed_test PASSED in 0.6s //tensorflow/core/distributed_runtime:collective_rma_distributed_test PASSED in 0.4s //tensorflow/core/distributed_runtime:device_resolver_distributed_test PASSED in 0.4s //tensorflow/core/distributed_runtime:message_wrappers_test PASSED in 0.1s //tensorflow/core/distributed_runtime:partial_run_mgr_test PASSED in 0.3s //tensorflow/core/distributed_runtime:recent_request_ids_test PASSED in 0.1s //tensorflow/core/distributed_runtime:request_id_test PASSED in 0.4s //tensorflow/core/distributed_runtime:rpc_collective_executor_mgr_test PASSED in 0.5s //tensorflow/core/distributed_runtime:server_lib_test PASSED in 0.1s //tensorflow/core/distributed_runtime:session_mgr_test PASSED in 0.6s //tensorflow/core/distributed_runtime:tensor_coding_test PASSED in 0.1s //tensorflow/core/distributed_runtime/coordination:coordination_service_barrier_proxy_test PASSED in 2.1s //tensorflow/core/distributed_runtime/eager:eager_service_impl_test PASSED in 21.2s //tensorflow/core/distributed_runtime/eager:remote_mgr_test PASSED in 9.4s //tensorflow/core/distributed_runtime/integration_test:c_api_multi_client_test_cpu PASSED in 23.3s //tensorflow/core/distributed_runtime/integration_test:c_api_recoverable_jobs_test_cpu PASSED in 32.9s 
//tensorflow/core/distributed_runtime/integration_test:c_api_session_coordination_test_cpu PASSED in 21.6s //tensorflow/core/distributed_runtime/rpc:grpc_tensor_coding_test PASSED in 2.7s //tensorflow/core/distributed_runtime/rpc:grpc_worker_cache_test PASSED in 0.6s //tensorflow/core/distributed_runtime/rpc/eager:grpc_eager_client_test PASSED in 0.5s //tensorflow/core/example:example_parser_configuration_test PASSED in 0.8s //tensorflow/core/example:feature_util_test PASSED in 0.1s //tensorflow/core/framework:allocator_test PASSED in 4.5s //tensorflow/core/framework:attr_value_util_test PASSED in 0.8s //tensorflow/core/framework:batch_util_test PASSED in 0.8s //tensorflow/core/framework:bfloat16_test PASSED in 0.8s //tensorflow/core/framework:common_shape_fns_test PASSED in 0.8s //tensorflow/core/framework:dataset_test PASSED in 0.7s //tensorflow/core/framework:device_base_test PASSED in 0.7s //tensorflow/core/framework:disable_jit_test PASSED in 0.7s //tensorflow/core/framework:framework_op_gen_lib_test PASSED in 0.1s //tensorflow/core/framework:framework_op_segment_test PASSED in 0.7s //tensorflow/core/framework:framework_resource_var_test PASSED in 0.1s //tensorflow/core/framework:framework_run_handler_test PASSED in 1.6s //tensorflow/core/framework:framework_run_handler_util_test PASSED in 2.0s //tensorflow/core/framework:full_type_inference_util_test PASSED in 0.6s //tensorflow/core/framework:full_type_util_test PASSED in 0.7s //tensorflow/core/framework:function_test PASSED in 0.7s //tensorflow/core/framework:graph_def_util_test PASSED in 0.7s //tensorflow/core/framework:graph_to_functiondef_test PASSED in 0.7s //tensorflow/core/framework:kernel_def_builder_test PASSED in 0.7s //tensorflow/core/framework:kernel_def_util_test PASSED in 0.7s //tensorflow/core/framework:memory_types_test PASSED in 0.7s //tensorflow/core/framework:model_test PASSED in 0.7s //tensorflow/core/framework:node_def_builder_test PASSED in 0.7s //tensorflow/core/framework:node_def_util_test PASSED in 0.7s //tensorflow/core/framework:node_properties_test PASSED in 0.7s //tensorflow/core/framework:op_compatibility_test PASSED in 0.7s //tensorflow/core/framework:op_def_builder_test PASSED in 0.7s //tensorflow/core/framework:op_def_util_test PASSED in 0.7s //tensorflow/core/framework:op_kernel_test PASSED in 0.7s //tensorflow/core/framework:op_registration_test PASSED in 0.7s //tensorflow/core/framework:partial_tensor_shape_test PASSED in 0.7s //tensorflow/core/framework:rendezvous_test PASSED in 2.7s //tensorflow/core/framework:resource_handle_test PASSED in 0.1s //tensorflow/core/framework:resource_mgr_test PASSED in 1.7s //tensorflow/core/framework:resource_op_kernel_test PASSED in 0.7s //tensorflow/core/framework:shape_inference_test PASSED in 0.7s //tensorflow/core/framework:shape_inference_testutil_test PASSED in 0.7s //tensorflow/core/framework:tensor_matcher_test PASSED in 0.7s //tensorflow/core/framework:tensor_shape_test PASSED in 7.6s //tensorflow/core/framework:tensor_slice_test PASSED in 0.7s //tensorflow/core/framework:tensor_test PASSED in 45.3s //tensorflow/core/framework:tensor_testutil_test PASSED in 0.7s //tensorflow/core/framework:tensor_util_test PASSED in 0.7s //tensorflow/core/framework:tracking_allocator_test PASSED in 0.7s //tensorflow/core/framework:types_test PASSED in 0.7s //tensorflow/core/framework:variant_op_registry_test PASSED in 29.9s //tensorflow/core/framework:variant_test PASSED in 0.7s //tensorflow/core/framework/registration:registration_test PASSED in 0.3s 
//tensorflow/core/function/capture:by_ref_capture_test PASSED in 8.6s //tensorflow/core/function/capture:capture_container_test PASSED in 8.3s //tensorflow/core/function/integration_test:side_inputs_manual_api_test PASSED in 26.6s //tensorflow/core/function/integration_test:side_inputs_test PASSED in 26.9s //tensorflow/core/function/polymorphism:function_cache_test PASSED in 9.0s //tensorflow/core/function/polymorphism:function_type_test PASSED in 13.0s //tensorflow/core/function/polymorphism:type_dispatch_test PASSED in 12.9s //tensorflow/core/function/runtime_client:runtime_client_cc_test PASSED in 33.5s //tensorflow/core/function/trace_type:custom_nest_trace_type_test PASSED in 9.2s //tensorflow/core/function/trace_type:default_types_test PASSED in 9.7s //tensorflow/core/function/trace_type:serialization_test PASSED in 9.5s //tensorflow/core/function/trace_type:trace_type_test PASSED in 12.8s //tensorflow/core/graph:algorithm_test PASSED in 0.7s //tensorflow/core/graph:collective_order_test PASSED in 0.4s //tensorflow/core/graph:control_flow_test PASSED in 0.7s //tensorflow/core/graph:costmodel_test PASSED in 0.7s //tensorflow/core/graph:edgeset_test PASSED in 0.7s //tensorflow/core/graph:graph_debug_info_builder_test PASSED in 0.6s //tensorflow/core/graph:graph_def_builder_test PASSED in 0.7s //tensorflow/core/graph:graph_partition_test PASSED in 0.7s //tensorflow/core/graph:graph_test PASSED in 0.7s //tensorflow/core/graph:node_builder_test PASSED in 0.7s //tensorflow/core/graph:optimizer_cse_test PASSED in 0.7s //tensorflow/core/graph:subgraph_test PASSED in 0.7s //tensorflow/core/graph:tensor_id_test PASSED in 0.7s //tensorflow/core/graph:validate_test PASSED in 0.7s //tensorflow/core/graph/regularization:simple_delete_test PASSED in 0.2s //tensorflow/core/graph/regularization:util_test PASSED in 0.1s //tensorflow/core/grappler:graph_topology_view_test PASSED in 0.1s //tensorflow/core/grappler:graph_view_test PASSED in 1.3s //tensorflow/core/grappler:grappler_item_builder_test PASSED in 1.4s //tensorflow/core/grappler:grappler_item_test PASSED in 1.3s //tensorflow/core/grappler:mutable_graph_view_test PASSED in 1.4s //tensorflow/core/grappler:utils_test PASSED in 2.4s //tensorflow/core/grappler/clusters:single_machine_test PASSED in 23.0s //tensorflow/core/grappler/clusters:virtual_cluster_test PASSED in 1.3s //tensorflow/core/grappler/costs:analytical_cost_estimator_test PASSED in 1.8s //tensorflow/core/grappler/costs:cost_estimator_test PASSED in 0.1s //tensorflow/core/grappler/costs:graph_memory_test PASSED in 1.3s //tensorflow/core/grappler/costs:graph_properties_test PASSED in 2.5s //tensorflow/core/grappler/costs:robust_stats_test PASSED in 0.1s //tensorflow/core/grappler/costs:utils_test PASSED in 1.3s //tensorflow/core/grappler/costs:virtual_placer_test PASSED in 0.3s //tensorflow/core/grappler/costs:virtual_scheduler_test PASSED in 2.1s //tensorflow/core/grappler/graph_analyzer:gen_node_test PASSED in 1.7s //tensorflow/core/grappler/graph_analyzer:graph_analyzer_test PASSED in 1.7s //tensorflow/core/grappler/graph_analyzer:hash_tools_test PASSED in 1.6s //tensorflow/core/grappler/graph_analyzer:sig_node_test PASSED in 2.4s //tensorflow/core/grappler/graph_analyzer:subgraph_test PASSED in 1.7s //tensorflow/core/grappler/inputs:utils_test PASSED in 0.1s //tensorflow/core/grappler/optimizers:arithmetic_optimizer_test_cpu PASSED in 3.2s //tensorflow/core/grappler/optimizers:auto_mixed_precision_test_cpu PASSED in 2.0s //tensorflow/core/grappler/optimizers:auto_parallel_test_cpu 
PASSED in 2.0s //tensorflow/core/grappler/optimizers:common_subgraph_elimination_test_cpu PASSED in 1.8s //tensorflow/core/grappler/optimizers:custom_graph_optimizer_registry_test_cpu PASSED in 3.7s //tensorflow/core/grappler/optimizers:debug_stripper_test_cpu PASSED in 2.1s //tensorflow/core/grappler/optimizers:dependency_optimizer_test_cpu PASSED in 1.7s //tensorflow/core/grappler/optimizers:evaluation_utils_test PASSED in 0.3s //tensorflow/core/grappler/optimizers:function_api_info_test PASSED in 0.1s //tensorflow/core/grappler/optimizers:function_optimizer_test_cpu PASSED in 2.4s //tensorflow/core/grappler/optimizers:generic_layout_optimizer_test_cpu PASSED in 2.1s //tensorflow/core/grappler/optimizers:generic_layout_optimizer_transposer_factory_test PASSED in 0.2s //tensorflow/core/grappler/optimizers:generic_layout_optimizer_transposer_test_cpu PASSED in 1.7s //tensorflow/core/grappler/optimizers:graph_optimizer_stage_test_cpu PASSED in 1.7s //tensorflow/core/grappler/optimizers:implementation_selector_test PASSED in 1.9s //tensorflow/core/grappler/optimizers:loop_optimizer_test_cpu PASSED in 1.8s //tensorflow/core/grappler/optimizers:memory_optimizer_test_cpu PASSED in 1.8s //tensorflow/core/grappler/optimizers:meta_optimizer_test_cpu PASSED in 7.1s //tensorflow/core/grappler/optimizers:mkl_remapper_test PASSED in 2.4s //tensorflow/core/grappler/optimizers:model_pruner_test_cpu PASSED in 1.9s //tensorflow/core/grappler/optimizers:pin_to_host_optimizer_test_cpu PASSED in 2.1s //tensorflow/core/grappler/optimizers:remapper_test_cpu PASSED in 7.6s //tensorflow/core/grappler/optimizers:scoped_allocator_optimizer_test PASSED in 1.9s //tensorflow/core/grappler/optimizers:shape_optimizer_test_cpu PASSED in 1.8s //tensorflow/core/grappler/optimizers:static_schedule_test_cpu PASSED in 1.3s //tensorflow/core/grappler/optimizers:tfg_optimizer_hook_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:auto_shard_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:autotune_buffer_sizes_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:batch_parallelization_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:disable_intra_op_parallelism_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:disable_prefetch_legacy_autotune_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:enable_gradient_descent_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:filter_fusion_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:filter_parallelization_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:function_utils_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:fusion_utils_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:graph_utils_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:inject_io_prefetch_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:inject_prefetch_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:make_deterministic_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:make_sloppy_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:map_and_batch_fusion_test PASSED in 0.3s //tensorflow/core/grappler/optimizers/data:map_and_filter_fusion_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:map_fusion_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:map_parallelization_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:noop_elimination_test PASSED in 0.3s 
//tensorflow/core/grappler/optimizers/data:parallel_batch_test PASSED in 0.3s //tensorflow/core/grappler/optimizers/data:remove_compression_map_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:replicate_on_split_test PASSED in 0.3s //tensorflow/core/grappler/optimizers/data:seq_interleave_prefetch_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:shuffle_and_repeat_fusion_test PASSED in 0.3s //tensorflow/core/grappler/optimizers/data:slack_test PASSED in 0.6s //tensorflow/core/grappler/optimizers/data:split_utils_test PASSED in 1.3s //tensorflow/core/grappler/optimizers/data:use_private_thread_pool_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/inference:batch_op_rewriter_test PASSED in 0.1s //tensorflow/core/grappler/utils:canonicalizer_test PASSED in 1.2s //tensorflow/core/grappler/utils:colocation_test PASSED in 0.4s //tensorflow/core/grappler/utils:frame_test PASSED in 0.1s //tensorflow/core/grappler/utils:functions_test PASSED in 1.3s //tensorflow/core/grappler/utils:graph_view_internal_test PASSED in 0.4s //tensorflow/core/grappler/utils:graph_view_test PASSED in 1.8s //tensorflow/core/grappler/utils:grappler_test_test PASSED in 11.1s //tensorflow/core/grappler/utils:pattern_utils_test PASSED in 0.4s //tensorflow/core/grappler/utils:scc_test PASSED in 1.3s //tensorflow/core/grappler/utils:symbolic_shapes_test PASSED in 0.1s //tensorflow/core/grappler/utils:topological_sort_test PASSED in 0.3s //tensorflow/core/grappler/utils:tpu_test PASSED in 0.1s //tensorflow/core/grappler/utils:transitive_fanin_test PASSED in 0.3s //tensorflow/core/grappler/utils:traversal_test PASSED in 0.3s //tensorflow/core/grappler/verifiers:structure_verifier_test PASSED in 0.4s //tensorflow/core/ir:interfaces_test PASSED in 0.1s //tensorflow/core/ir:ops_test PASSED in 0.1s //tensorflow/core/ir:shape_inference_utils_test PASSED in 0.2s //tensorflow/core/ir:tf_op_registry_test PASSED in 0.2s //tensorflow/core/ir:tf_op_wrapper_test PASSED in 0.1s //tensorflow/core/ir:utility_test PASSED in 0.1s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:arg_as_control_ret.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:backedge_segment.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:empty.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:error_during_backedge.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_case_with_attr_inference.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_if_with_attr_inference.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_iterator_get_next_attr_inference.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_underscore_output_shapes.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_while_with_attr_inference.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infeed_dequeue.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infer_arg_handle_type.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infer_with_output_shapes.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_arg_name.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_backedge_input_size.pbtxt.test PASSED in 0.6s 
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_duplicated_node_name.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_edge_index.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_edge_name.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_attr_key.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_func_attr_key.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_func_attr_name.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_op_type.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_func_with_empty_name.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_function_import.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_control_result.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_input.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_name.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_result.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_function_attr_name.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_function_named_edge_index.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_handle_data.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_input.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_result.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_result_value.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_data_result.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_data_result_value.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_input.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_two_inputs.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_named_edge_index.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_op_name.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_type_list.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:legacy_call.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:negative_shape.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:negative_zero_constant.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:three_nodes_with_attrs.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:version.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:empty.mlir.test PASSED in 0.8s 
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:fulltype.mlir.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:func_with_no_args_or_results.mlir.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:negative_zero_constant.mlir.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:nested_legacy_call.mlir.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:three_nodes_with_attrs.mlir.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:version.mlir.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/saved_model:saved_model_roundtrip_test PASSED in 0.2s //tensorflow/core/ir/tests:attributes.mlir.test PASSED in 1.3s //tensorflow/core/ir/tests:canonicalize.mlir.test PASSED in 1.2s //tensorflow/core/ir/tests:compatible_types.mlir.test PASSED in 1.2s //tensorflow/core/ir/tests:concrete-ops.mlir.test PASSED in 1.1s //tensorflow/core/ir/tests:generic_concrete_ops.mlir.test PASSED in 1.9s //tensorflow/core/ir/tests:invalid-concrete-ops.mlir.test PASSED in 2.0s //tensorflow/core/ir/tests:invalid-preserved-attrs.mlir.test PASSED in 1.4s //tensorflow/core/ir/tests:invalid.mlir.test PASSED in 1.3s //tensorflow/core/ir/tests:invalid_types.mlir.test PASSED in 2.0s //tensorflow/core/ir/tests:ops.mlir.test PASSED in 1.9s //tensorflow/core/ir/tests:region-invalid-ops.mlir.test PASSED in 1.8s //tensorflow/core/ir/tests:region-ops-graph.mlir.test PASSED in 1.1s //tensorflow/core/ir/tests:region-ops.mlir.test PASSED in 1.1s //tensorflow/core/ir/tests:types.mlir.test PASSED in 2.1s //tensorflow/core/ir/types:dialect_test PASSED in 0.1s //tensorflow/core/kernels:as_string_op_test PASSED in 0.7s //tensorflow/core/kernels:basic_ops_benchmark_test PASSED in 0.4s //tensorflow/core/kernels:batch_kernels_auto_warmup_test PASSED in 1.4s //tensorflow/core/kernels:batch_kernels_env_test PASSED in 0.5s //tensorflow/core/kernels:batch_kernels_test PASSED in 34.8s //tensorflow/core/kernels:bias_op_test PASSED in 0.6s //tensorflow/core/kernels:bincount_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:broadcast_to_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:cast_op_test_cpu PASSED in 0.5s //tensorflow/core/kernels:checkpoint_callback_manager_test PASSED in 0.4s //tensorflow/core/kernels:clustering_ops_test PASSED in 0.4s //tensorflow/core/kernels:composite_tensor_variant_test PASSED in 0.4s //tensorflow/core/kernels:concat_op_test PASSED in 0.4s //tensorflow/core/kernels:constant_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:control_flow_ops_test PASSED in 7.9s //tensorflow/core/kernels:conv_grad_filter_ops_benchmark_test_cpu PASSED in 0.4s //tensorflow/core/kernels:conv_grad_input_ops_benchmark_test_cpu PASSED in 0.4s //tensorflow/core/kernels:conv_ops_benchmark_test_cpu PASSED in 0.4s //tensorflow/core/kernels:conv_ops_test_cpu PASSED in 4.8s //tensorflow/core/kernels:count_ops_test PASSED in 0.4s //tensorflow/core/kernels:cross_op_test PASSED in 0.5s //tensorflow/core/kernels:cwise_ops_test_cpu PASSED in 0.4s //tensorflow/core/kernels:debug_ops_test PASSED in 0.6s //tensorflow/core/kernels:decode_wav_op_test PASSED in 2.2s //tensorflow/core/kernels:deep_conv2d_test PASSED in 0.3s //tensorflow/core/kernels:dequantize_op_test PASSED in 0.5s //tensorflow/core/kernels:diag_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:dynamic_partition_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:dynamic_stitch_op_test_cpu PASSED in 0.4s 
//tensorflow/core/kernels:eigen_activations_test PASSED in 0.1s //tensorflow/core/kernels:eigen_attention_test PASSED in 0.1s //tensorflow/core/kernels:eigen_backward_cuboid_convolutions_test PASSED in 0.4s //tensorflow/core/kernels:eigen_backward_spatial_convolutions_test PASSED in 0.1s //tensorflow/core/kernels:eigen_benchmark_cpu_test PASSED in 0.1s //tensorflow/core/kernels:eigen_mkldnn_contraction_kernel_test PASSED in 0.1s //tensorflow/core/kernels:eigen_pooling_test PASSED in 0.3s //tensorflow/core/kernels:encode_wav_op_test PASSED in 2.1s //tensorflow/core/kernels:fingerprint_op_test PASSED in 0.4s //tensorflow/core/kernels:fused_batch_norm_ex_op_test_cpu PASSED in 0.6s //tensorflow/core/kernels:fused_batch_norm_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:gather_nd_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:gather_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:guarantee_const_op_test PASSED in 0.4s //tensorflow/core/kernels:identity_n_op_test PASSED in 0.4s //tensorflow/core/kernels:identity_op_test PASSED in 0.4s //tensorflow/core/kernels:immutable_constant_op_test PASSED in 0.7s //tensorflow/core/kernels:in_topk_op_test PASSED in 0.3s //tensorflow/core/kernels:isotonic_regression_op_test PASSED in 0.4s //tensorflow/core/kernels:logging_ops_test PASSED in 1.4s //tensorflow/core/kernels:lookup_ops_test PASSED in 0.4s //tensorflow/core/kernels:loss_test PASSED in 0.1s //tensorflow/core/kernels:lrn_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:merge_v2_checkpoints_op_test PASSED in 0.6s //tensorflow/core/kernels:mfcc_dct_test PASSED in 0.1s //tensorflow/core/kernels:mfcc_mel_filterbank_test PASSED in 0.1s //tensorflow/core/kernels:mfcc_op_test_cpu PASSED in 2.1s //tensorflow/core/kernels:mfcc_test PASSED in 0.1s //tensorflow/core/kernels:multinomial_op_test_cpu PASSED in 0.3s //tensorflow/core/kernels:nn_ops_test_cpu PASSED in 0.4s //tensorflow/core/kernels:one_hot_op_test PASSED in 0.3s //tensorflow/core/kernels:ops_testutil_test PASSED in 0.4s //tensorflow/core/kernels:ops_util_test PASSED in 0.1s //tensorflow/core/kernels:parameterized_truncated_normal_op_test_cpu PASSED in 0.3s //tensorflow/core/kernels:parse_tensor_test PASSED in 0.4s //tensorflow/core/kernels:quantization_utils_test PASSED in 0.5s //tensorflow/core/kernels:quantize_and_dequantize_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:quantize_down_and_shrink_range_op_test PASSED in 0.4s //tensorflow/core/kernels:quantize_op_test PASSED in 0.4s //tensorflow/core/kernels:quantized_activation_ops_test PASSED in 0.4s //tensorflow/core/kernels:quantized_add_op_test PASSED in 0.9s //tensorflow/core/kernels:quantized_batch_norm_op_test PASSED in 0.4s //tensorflow/core/kernels:quantized_bias_add_op_test PASSED in 0.4s //tensorflow/core/kernels:quantized_concat_op_test PASSED in 0.4s //tensorflow/core/kernels:quantized_conv_ops_test PASSED in 0.4s //tensorflow/core/kernels:quantized_instance_norm_test PASSED in 0.7s //tensorflow/core/kernels:quantized_matmul_op_test PASSED in 0.4s //tensorflow/core/kernels:quantized_mul_op_test PASSED in 0.9s //tensorflow/core/kernels:quantized_pooling_ops_test PASSED in 0.4s //tensorflow/core/kernels:quantized_reshape_op_test PASSED in 0.5s //tensorflow/core/kernels:quantized_resize_bilinear_op_test PASSED in 1.5s //tensorflow/core/kernels:ragged_fill_empty_rows_op_test PASSED in 0.4s //tensorflow/core/kernels:ragged_gather_op_test PASSED in 0.4s //tensorflow/core/kernels:ragged_range_op_test PASSED in 0.4s 
//tensorflow/core/kernels:ragged_tensor_from_variant_op_test PASSED in 0.4s //tensorflow/core/kernels:ragged_tensor_to_sparse_kernel_test PASSED in 0.4s //tensorflow/core/kernels:ragged_tensor_to_tensor_op_test PASSED in 0.4s //tensorflow/core/kernels:ragged_tensor_to_variant_op_test PASSED in 0.4s //tensorflow/core/kernels:random_binomial_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:random_index_shuffle_test PASSED in 0.2s //tensorflow/core/kernels:random_op_test_cpu PASSED in 0.3s //tensorflow/core/kernels:random_poisson_op_test_cpu PASSED in 0.3s //tensorflow/core/kernels:range_sampler_test PASSED in 0.1s //tensorflow/core/kernels:reduction_ops_test_cpu PASSED in 0.3s //tensorflow/core/kernels:regex_replace_op_test PASSED in 0.4s //tensorflow/core/kernels:requantization_range_op_test PASSED in 0.4s //tensorflow/core/kernels:requantize_op_test PASSED in 0.4s //tensorflow/core/kernels:resource_ops_test PASSED in 0.4s //tensorflow/core/kernels:restore_op_test PASSED in 0.4s //tensorflow/core/kernels:restore_v2_op_test PASSED in 0.4s //tensorflow/core/kernels:reverse_op_test PASSED in 0.4s //tensorflow/core/kernels:roll_op_test PASSED in 0.4s //tensorflow/core/kernels:save_op_test PASSED in 0.4s //tensorflow/core/kernels:save_v2_op_test PASSED in 0.4s //tensorflow/core/kernels:scan_ops_test_cpu PASSED in 0.3s //tensorflow/core/kernels:scatter_nd_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:scatter_op_test PASSED in 0.4s //tensorflow/core/kernels:scoped_allocator_ops_test_cpu PASSED in 6.5s //tensorflow/core/kernels:sdca_ops_test PASSED in 1.4s //tensorflow/core/kernels:segment_reduction_ops_test PASSED in 0.3s //tensorflow/core/kernels:sendrecv_ops_test PASSED in 0.3s //tensorflow/core/kernels:sequence_ops_test PASSED in 0.4s //tensorflow/core/kernels:shape_ops_test PASSED in 0.3s //tensorflow/core/kernels:slice_op_test PASSED in 0.4s //tensorflow/core/kernels:spacetobatch_benchmark_test_cpu PASSED in 0.3s //tensorflow/core/kernels:sparse_add_op_test PASSED in 0.4s //tensorflow/core/kernels:sparse_dense_binary_op_shared_test PASSED in 0.4s //tensorflow/core/kernels:sparse_fill_empty_rows_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:sparse_matmul_op_test_cpu PASSED in 0.3s //tensorflow/core/kernels:sparse_reduce_sum_op_test PASSED in 0.4s //tensorflow/core/kernels:sparse_tensor_dense_matmul_op_test_cpu PASSED in 0.3s //tensorflow/core/kernels:sparse_to_dense_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:sparse_utils_test PASSED in 0.3s //tensorflow/core/kernels:sparse_xent_op_test_cpu PASSED in 0.3s //tensorflow/core/kernels:spectrogram_op_test_cpu PASSED in 2.0s //tensorflow/core/kernels:spectrogram_test PASSED in 0.1s //tensorflow/core/kernels:split_op_test_cpu PASSED in 0.3s //tensorflow/core/kernels:split_v_op_test_cpu PASSED in 0.3s //tensorflow/core/kernels:strided_slice_op_test PASSED in 0.4s //tensorflow/core/kernels:string_format_op_test PASSED in 0.5s //tensorflow/core/kernels:string_ngrams_op_test PASSED in 0.4s //tensorflow/core/kernels:string_split_op_test PASSED in 0.4s //tensorflow/core/kernels:substr_op_test PASSED in 0.3s //tensorflow/core/kernels:summary_audio_op_test PASSED in 0.4s //tensorflow/core/kernels:summary_image_op_test PASSED in 0.4s //tensorflow/core/kernels:summary_op_test PASSED in 0.4s //tensorflow/core/kernels:summary_tensor_op_test PASSED in 0.4s //tensorflow/core/kernels:tensor_cord_test PASSED in 0.1s //tensorflow/core/kernels:tensor_flag_utils_test PASSED in 0.1s //tensorflow/core/kernels:tensor_map_test PASSED in 0.1s 
//tensorflow/core/kernels:training_ops_test PASSED in 0.4s //tensorflow/core/kernels:transpose_util_test PASSED in 0.3s //tensorflow/core/kernels:unary_ops_composition_test_cpu PASSED in 2.1s //tensorflow/core/kernels:unique_op_test PASSED in 0.4s //tensorflow/core/kernels:variable_ops_test PASSED in 1.6s //tensorflow/core/kernels:while_op_test PASSED in 0.7s //tensorflow/core/kernels:xent_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels/batching_util:basic_batch_scheduler_test PASSED in 0.1s //tensorflow/core/kernels/batching_util:batch_input_task_test PASSED in 0.4s //tensorflow/core/kernels/batching_util:batch_resource_base_test PASSED in 0.1s //tensorflow/core/kernels/batching_util:batch_scheduler_test PASSED in 0.1s //tensorflow/core/kernels/batching_util:batch_scheduler_utils_test PASSED in 0.1s //tensorflow/core/kernels/batching_util:bounded_executor_test PASSED in 20.1s //tensorflow/core/kernels/batching_util:input_split_metadata_test PASSED in 0.1s //tensorflow/core/kernels/batching_util:periodic_function_test PASSED in 2.1s //tensorflow/core/kernels/batching_util:serial_device_batch_scheduler_test PASSED in 1.9s //tensorflow/core/kernels/batching_util:shared_batch_scheduler_test PASSED in 30.7s //tensorflow/core/kernels/batching_util:threadsafe_status_test PASSED in 0.1s //tensorflow/core/kernels/data:batch_dataset_op_test PASSED in 1.0s //tensorflow/core/kernels/data:cache_dataset_ops_test PASSED in 0.7s //tensorflow/core/kernels/data:concatenate_dataset_op_test PASSED in 0.6s //tensorflow/core/kernels/data:filter_dataset_op_test PASSED in 0.6s //tensorflow/core/kernels/data:finalize_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:fixed_length_record_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:flat_map_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:get_options_op_test PASSED in 0.4s //tensorflow/core/kernels/data:interleave_dataset_op_test PASSED in 0.6s //tensorflow/core/kernels/data:iterator_ops_test PASSED in 0.5s //tensorflow/core/kernels/data:map_dataset_op_test PASSED in 0.7s //tensorflow/core/kernels/data:map_defun_op_test PASSED in 0.4s //tensorflow/core/kernels/data:optimize_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:options_dataset_op_test PASSED in 0.4s //tensorflow/core/kernels/data:padded_batch_dataset_op_test PASSED in 0.6s //tensorflow/core/kernels/data:parallel_batch_dataset_op_test PASSED in 0.6s //tensorflow/core/kernels/data:parallel_filter_dataset_op_test PASSED in 0.7s //tensorflow/core/kernels/data:parallel_interleave_dataset_op_test PASSED in 0.7s //tensorflow/core/kernels/data:parallel_map_dataset_op_test PASSED in 0.8s //tensorflow/core/kernels/data:prefetch_autotuner_test PASSED in 0.3s //tensorflow/core/kernels/data:prefetch_dataset_op_test PASSED in 0.8s //tensorflow/core/kernels/data:range_dataset_op_test PASSED in 0.9s //tensorflow/core/kernels/data:reduce_dataset_op_test PASSED in 0.8s //tensorflow/core/kernels/data:repeat_dataset_op_test PASSED in 0.9s //tensorflow/core/kernels/data:rewrite_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:shard_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:shuffle_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:skip_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:sparse_tensor_slice_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:take_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:tensor_dataset_op_test PASSED in 0.4s 
//tensorflow/core/kernels/data:tensor_slice_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:text_line_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:tf_record_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:window_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data:zip_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data/experimental:assert_next_dataset_op_test PASSED in 0.4s //tensorflow/core/kernels/data/experimental:assert_prev_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data/experimental:auto_shard_dataset_op_test PASSED in 0.5s //tensorflow/core/kernels/data/experimental:directed_interleave_dataset_op_test PASSED in 0.4s //tensorflow/core/kernels/data/experimental:list_dataset_op_test PASSED in 0.4s //tensorflow/core/kernels/data/experimental:map_and_batch_dataset_op_test PASSED in 0.7s //tensorflow/core/kernels/data/experimental:parallel_interleave_dataset_op_test PASSED in 0.7s //tensorflow/core/kernels/data/experimental:random_dataset_op_test PASSED in 0.6s //tensorflow/core/kernels/data/experimental:sampling_dataset_op_test PASSED in 0.6s //tensorflow/core/kernels/data/experimental:save_dataset_op_test PASSED in 0.9s //tensorflow/core/kernels/data/experimental:unique_dataset_op_test PASSED in 0.4s //tensorflow/core/kernels/image:adjust_contrast_op_benchmark_test_cpu PASSED in 0.4s //tensorflow/core/kernels/image:adjust_contrast_op_test PASSED in 0.4s //tensorflow/core/kernels/image:colorspace_op_test PASSED in 0.4s //tensorflow/core/kernels/image:crop_and_resize_op_benchmark_test_cpu PASSED in 0.4s //tensorflow/core/kernels/image:crop_and_resize_op_test PASSED in 0.4s //tensorflow/core/kernels/image:encode_jpeg_op_test PASSED in 0.4s //tensorflow/core/kernels/image:mirror_pad_op_benchmark_test_cpu PASSED in 0.4s //tensorflow/core/kernels/image:mirror_pad_op_test PASSED in 0.5s //tensorflow/core/kernels/image:non_max_suppression_op_benchmark_test PASSED in 0.4s //tensorflow/core/kernels/image:non_max_suppression_op_test PASSED in 0.5s //tensorflow/core/kernels/image:resize_area_op_test PASSED in 0.8s //tensorflow/core/kernels/image:resize_benchmark_test_cpu PASSED in 0.4s //tensorflow/core/kernels/image:resize_ops_test_cpu PASSED in 2.0s //tensorflow/core/kernels/image:sampling_kernels_test PASSED in 0.5s //tensorflow/core/kernels/image:scale_and_translate_op_test PASSED in 1.6s //tensorflow/core/kernels/linalg:banded_triangular_solve_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels/linalg:matrix_triangular_solve_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels/mkl:mkl_conv_ops_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_dequantize_op_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_fused_batch_norm_op_test PASSED in 0.2s //tensorflow/core/kernels/mkl:mkl_fused_ops_test PASSED in 0.6s //tensorflow/core/kernels/mkl:mkl_matmul_op_benchmark PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_qmatmul_op_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_quantize_op_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_quantized_concat_op_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_quantized_conv_ops_perchannel_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_quantized_conv_ops_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_quantized_pooling_ops_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_relu_op_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_requantize_ops_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_sparse_matrix_matmul_op_benchmark PASSED 
in 0.1s //tensorflow/core/kernels/mkl:mkl_swish_op_test PASSED in 0.1s //tensorflow/core/kernels/mkl:onednn_nn_ops_benchmark PASSED in 0.1s //tensorflow/core/kernels/sparse:kernels_test PASSED in 0.4s //tensorflow/core/kernels/uniform_quant_ops:math_utils_test PASSED in 0.1s //tensorflow/core/kernels/uniform_quant_ops:tensor_utils_test PASSED in 0.1s //tensorflow/core/kernels/uniform_quant_ops:uniform_dequantize_op_test PASSED in 0.4s //tensorflow/core/kernels/uniform_quant_ops:uniform_quantize_op_test PASSED in 0.4s //tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_add_op_test PASSED in 0.4s //tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_clip_by_value_op_test PASSED in 0.4s //tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_convolution_ops_test PASSED in 0.5s //tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_dot_ops_test PASSED in 0.6s //tensorflow/core/kernels/uniform_quant_ops:uniform_requantize_op_test PASSED in 0.5s //tensorflow/core/lib/db:sqlite_test PASSED in 0.2s //tensorflow/core/lib/gif:lib_gif_io_test PASSED in 3.5s //tensorflow/core/lib/jpeg:lib_jpeg_jpeg_mem_unittest PASSED in 0.5s //tensorflow/core/ops:cudnn_rnn_ops_test_cc PASSED in 0.5s //tensorflow/core/ops:ops_array_grad_test PASSED in 1.2s //tensorflow/core/ops:ops_math_grad_test PASSED in 3.4s //tensorflow/core/ops:ops_tests PASSED in 0.5s //tensorflow/core/ops/compat:backwards_compatibility_test PASSED in 0.4s //tensorflow/core/platform:enable_tf2_utils_test PASSED in 0.1s //tensorflow/core/platform:env_test PASSED in 2.3s //tensorflow/core/platform:fake_python_env_test PASSED in 0.1s //tensorflow/core/platform:file_system_test PASSED in 0.1s //tensorflow/core/platform:platform_strings_test PASSED in 0.1s //tensorflow/core/platform:ram_file_system_test PASSED in 11.0s //tensorflow/core/platform:resource_loader_test PASSED in 0.1s //tensorflow/core/platform:vmodule_benchmark_test PASSED in 0.1s //tensorflow/core/platform:vmodule_test PASSED in 0.2s //tensorflow/core/profiler/convert:dcn_analysis_test PASSED in 0.1s //tensorflow/core/profiler/convert:dcn_utils_test PASSED in 0.1s //tensorflow/core/profiler/convert:hlo_proto_to_graph_view_test PASSED in 0.1s //tensorflow/core/profiler/convert:hlo_proto_to_memory_visualization_utils_test PASSED in 0.1s //tensorflow/core/profiler/convert:op_stats_combiner_test PASSED in 0.1s //tensorflow/core/profiler/convert:op_stats_to_pod_stats_test PASSED in 0.1s //tensorflow/core/profiler/convert:op_stats_to_pod_viewer_test PASSED in 0.1s //tensorflow/core/profiler/convert:op_stats_to_tf_stats_test PASSED in 0.1s //tensorflow/core/profiler/convert:repository_test PASSED in 0.1s //tensorflow/core/profiler/convert:xplane_to_dcn_collective_stats_test PASSED in 0.1s //tensorflow/core/profiler/convert:xplane_to_kernel_stats_db_test PASSED in 0.1s //tensorflow/core/profiler/convert:xplane_to_memory_profile_test PASSED in 0.1s //tensorflow/core/profiler/convert:xplane_to_op_metrics_db_test PASSED in 0.1s //tensorflow/core/profiler/convert:xplane_to_op_stats_test PASSED in 0.2s //tensorflow/core/profiler/convert:xplane_to_step_events_test PASSED in 0.1s //tensorflow/core/profiler/convert:xplane_to_tf_functions_test PASSED in 0.1s //tensorflow/core/profiler/convert:xplane_to_tool_names_test PASSED in 0.1s //tensorflow/core/profiler/convert/trace_viewer:trace_viewer_visibility_test PASSED in 0.1s //tensorflow/core/profiler/internal:tfprof_show_test PASSED in 0.4s //tensorflow/core/profiler/internal:tfprof_stats_test PASSED in 0.5s 
//tensorflow/core/profiler/internal:tfprof_tensor_test PASSED in 0.4s //tensorflow/core/profiler/internal:tfprof_timeline_test PASSED in 0.4s //tensorflow/core/profiler/internal/advisor:tfprof_advisor_test PASSED in 0.4s //tensorflow/core/profiler/lib:profiler_disabled_test PASSED in 0.1s //tensorflow/core/profiler/utils:derived_timeline_test PASSED in 0.1s //tensorflow/core/profiler/utils:kernel_stats_utils_test PASSED in 0.1s //tensorflow/core/profiler/utils:op_metrics_db_utils_test PASSED in 0.1s //tensorflow/core/profiler/utils:step_intersection_test PASSED in 0.1s //tensorflow/core/runtime_fallback/util:type_util_test PASSED in 0.1s //tensorflow/core/summary:schema_test PASSED in 0.1s //tensorflow/core/summary:summary_db_writer_test PASSED in 0.1s //tensorflow/core/summary:summary_file_writer_test PASSED in 0.1s //tensorflow/core/tfrt/common:pjrt_cpu_client_registration_test PASSED in 6.2s //tensorflow/core/tfrt/common:pjrt_state_test PASSED in 6.4s //tensorflow/core/tfrt/common:pjrt_util_test PASSED in 6.4s //tensorflow/core/tfrt/fallback:cost_recorder_test PASSED in 0.1s //tensorflow/core/tfrt/fallback:fallback_state_test PASSED in 0.3s //tensorflow/core/tfrt/graph_executor:config_test PASSED in 0.1s //tensorflow/core/tfrt/mlrt/attribute:attribute_test PASSED in 0.2s //tensorflow/core/tfrt/mlrt/bytecode:bytecode_test PASSED in 0.1s //tensorflow/core/tfrt/mlrt/bytecode:executable_test PASSED in 0.1s //tensorflow/core/tfrt/mlrt/bytecode:function_test PASSED in 0.1s //tensorflow/core/tfrt/mlrt/bytecode:kernel_test PASSED in 0.1s //tensorflow/core/tfrt/mlrt/bytecode:span_test PASSED in 0.1s //tensorflow/core/tfrt/mlrt/interpreter:context_test PASSED in 0.1s //tensorflow/core/tfrt/mlrt/interpreter:future_test PASSED in 0.1s //tensorflow/core/tfrt/mlrt/interpreter:interpreter_test PASSED in 0.1s //tensorflow/core/tfrt/mlrt/interpreter:register_span_test PASSED in 0.1s //tensorflow/core/tfrt/mlrt/interpreter:value_test PASSED in 0.1s //tensorflow/core/tfrt/run_handler_thread_pool:run_handler_concurrent_work_queue_test PASSED in 0.2s //tensorflow/core/tfrt/run_handler_thread_pool:run_handler_test PASSED in 0.9s //tensorflow/core/tfrt/run_handler_thread_pool:run_handler_util_test PASSED in 0.1s //tensorflow/core/tfrt/runtime:tf_threadpool_concurrent_work_queue_test PASSED in 0.1s //tensorflow/core/tfrt/runtime:work_queue_interface_test PASSED in 0.1s //tensorflow/core/tfrt/utils:graph_partition_test PASSED in 1.9s //tensorflow/core/transforms:eval_utils_test PASSED in 1.3s //tensorflow/core/transforms:graph_transform_wrapper_test PASSED in 0.2s //tensorflow/core/util:bcast_test PASSED in 0.7s //tensorflow/core/util:command_line_flags_test PASSED in 0.7s //tensorflow/core/util:debug_data_dumper_test PASSED in 0.6s //tensorflow/core/util:debug_events_writer_test PASSED in 0.1s //tensorflow/core/util:dump_graph_test PASSED in 0.7s //tensorflow/core/util:equal_graph_def_test PASSED in 0.7s //tensorflow/core/util:events_writer_test PASSED in 2.6s //tensorflow/core/util:example_proto_fast_parsing_test PASSED in 0.9s //tensorflow/core/util:example_proto_helper_test PASSED in 0.6s //tensorflow/core/util:exec_on_stall_test PASSED in 2.1s //tensorflow/core/util:fake_clock_env_test PASSED in 1.8s //tensorflow/core/util:incremental_barrier_test PASSED in 0.1s //tensorflow/core/util:matmul_bcast_test PASSED in 0.7s //tensorflow/core/util:memmapped_file_system_test PASSED in 0.6s //tensorflow/core/util:mkl_heuristics_test PASSED in 0.1s //tensorflow/core/util:overflow_test PASSED in 0.1s 
//tensorflow/core/util:presized_cuckoo_map_test PASSED in 1.8s //tensorflow/core/util:ragged_to_dense_util_test PASSED in 0.3s //tensorflow/core/util:reffed_status_callback_test PASSED in 0.6s //tensorflow/core/util:reporter_test PASSED in 0.7s //tensorflow/core/util:saved_tensor_slice_util_test PASSED in 0.7s //tensorflow/core/util:semver_test PASSED in 0.7s //tensorflow/core/util:stat_summarizer_test PASSED in 0.7s //tensorflow/core/util:strided_slice_op_test PASSED in 0.7s //tensorflow/core/util:tensor_format_test PASSED in 0.7s //tensorflow/core/util:tensor_slice_reader_test PASSED in 0.9s //tensorflow/core/util:tensor_slice_set_test PASSED in 0.7s //tensorflow/core/util:tensor_slice_util_test PASSED in 0.7s //tensorflow/core/util:tensor_slice_writer_test PASSED in 1.3s //tensorflow/core/util:work_sharder_test PASSED in 0.9s //tensorflow/core/util/ctc:ctc_beam_search_test PASSED in 0.1s //tensorflow/core/util/proto:descriptor_pool_registry_test PASSED in 0.5s //tensorflow/core/util/proto:proto_utils_test PASSED in 0.4s //tensorflow/core/util/quantization:uniform_quant_ops_params_test PASSED in 0.1s //tensorflow/core/util/sparse:sparse_tensor_test PASSED in 0.1s //tensorflow/core/util/tensor_bundle:tensor_bundle_test PASSED in 16.4s //tensorflow/dtensor/mlir:dtensor_location_test PASSED in 0.1s //tensorflow/dtensor/mlir/tests:annotate_global_shape.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:cluster_function_conversion.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:constant_folding.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:decompose_controlflow.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:designate_resource_handle_mesh.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:device_mesh_cluster_coarsening.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_all_gather.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_all_scatter.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_allreduce_combine_optimization.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_allreduce_lowering.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_allreduce_scatter_optimization.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_allreduce_sum_optimization.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_alltoall_lowering.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_collective_type_lowering.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_layout_must_execute.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_layout_to_xla_sharding_op.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_mixed_precision_reduce.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_reduce_scatter_lowering.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_remove_dtensorlayout.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_replace_auxiliary_layout_op.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_replace_relayout_with_identity.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_set_hlo_sharding.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_set_hlo_sharding_default.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_xla_spmd_integration.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:elide_identity_before_copy_to_mesh.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:function_renaming.mlir.test PASSED in 
0.5s //tensorflow/dtensor/mlir/tests:handle_cross_cluster_dependencies.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:handle_sparsetensors.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:layout_propagation_v2.mlir.test PASSED in 0.7s //tensorflow/dtensor/mlir/tests:lower_send_recv.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:merge_clusters.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:mesh_propagation.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:multi_device_expansion.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:op_to_device_cluster.mlir.test PASSED in 0.9s //tensorflow/dtensor/mlir/tests:propagate_default_layout.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:propagate_device_id_to_function.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:restore_and_assign.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:restore_shape_inference.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:set_default_sharding.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:sparse_expansion.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:spmd_batchparallel.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:spmd_concat.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:spmd_conv.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:spmd_einsum.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:spmd_expansion.mlir.test PASSED in 0.7s //tensorflow/dtensor/mlir/tests:spmd_fft.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:spmd_io_ops.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:spmd_iterator.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:spmd_matmul.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:spmd_random.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:spmd_save_restore.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:spmd_segment_sum.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:spmd_slice.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:spmd_softmax_loss.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:spmd_squeeze.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:spmd_var_handle.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:tf_dtensor_ops.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:tpu_add_resource_device_attribute.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:tpu_integration.mlir.test PASSED in 0.9s //tensorflow/dtensor/mlir/tests:undo_merge_const_across_mesh.mlir.test PASSED in 0.7s //tensorflow/dtensor/mlir/tests:update_tpu_metadata.mlir.test PASSED in 0.7s //tensorflow/dtensor/python/tests:api_test PASSED in 27.0s //tensorflow/dtensor/python/tests:array_ops_test_cpu PASSED in 22.4s //tensorflow/dtensor/python/tests:cache_test_cpu PASSED in 17.8s //tensorflow/dtensor/python/tests:collective_combine_all_reduce_test_cpu PASSED in 19.1s //tensorflow/dtensor/python/tests:collective_test_cpu PASSED in 20.2s //tensorflow/dtensor/python/tests:config_test_cpu PASSED in 10.0s //tensorflow/dtensor/python/tests:device_test_cpu PASSED in 52.7s //tensorflow/dtensor/python/tests:layout_test_cpu PASSED in 24.5s //tensorflow/dtensor/python/tests:mesh_util_test_cpu PASSED in 19.9s //tensorflow/dtensor/python/tests:multi_client_test_cpu PASSED in 14.5s //tensorflow/dtensor/python/tests:numpy_util_test_cpu PASSED in 10.6s //tensorflow/dtensor/python/tests:variable_test_cpu PASSED in 14.1s //tensorflow/dtensor/tests:dtensor_operation_test PASSED in 23.3s 
//tensorflow/dtensor/tests:executable_manager_test PASSED in 23.3s //tensorflow/dtensor/tests:layout_to_xla_sharding_test PASSED in 0.1s //tensorflow/dtensor/tests:slice_util_test PASSED in 0.1s //tensorflow/dtensor/tests:spmd_expander_test PASSED in 5.4s //tensorflow/dtensor/tests:tensor_layout_test PASSED in 0.1s //tensorflow/examples/adding_an_op:fact_test PASSED in 23.0s //tensorflow/examples/adding_an_op:zero_out_1_test PASSED in 24.1s //tensorflow/examples/adding_an_op:zero_out_2_test PASSED in 28.4s //tensorflow/examples/adding_an_op:zero_out_3_test PASSED in 31.0s //tensorflow/examples/custom_ops_doc/multiplex_1:multiplex_1_test PASSED in 31.2s //tensorflow/examples/custom_ops_doc/multiplex_2:multiplex_2_test_cpu PASSED in 111.5s //tensorflow/examples/custom_ops_doc/multiplex_3:multiplex_3_test PASSED in 35.6s //tensorflow/examples/custom_ops_doc/multiplex_4:multiplex_4_test PASSED in 80.3s //tensorflow/examples/custom_ops_doc/simple_hash_table:simple_hash_table_test PASSED in 40.8s //tensorflow/examples/custom_ops_doc/sleep:sleep_test PASSED in 85.6s //tensorflow/examples/speech_commands:accuracy_utils_test PASSED in 1.8s //tensorflow/examples/speech_commands:models_test PASSED in 81.7s //tensorflow/examples/speech_commands:recognize_commands_test PASSED in 2.0s //tensorflow/examples/wav_to_spectrogram:wav_to_spectrogram_test PASSED in 2.2s //tensorflow/js:ts_op_gen_test PASSED in 0.1s //tensorflow/python/autograph/converters:asserts_test PASSED in 9.9s //tensorflow/python/autograph/converters:break_statements_test PASSED in 11.1s //tensorflow/python/autograph/converters:call_trees_test PASSED in 10.3s //tensorflow/python/autograph/converters:conditional_expressions_test PASSED in 9.8s //tensorflow/python/autograph/converters:continue_statements_test PASSED in 11.4s //tensorflow/python/autograph/converters:control_flow_test PASSED in 17.2s //tensorflow/python/autograph/converters:directives_test PASSED in 9.5s //tensorflow/python/autograph/converters:functions_test PASSED in 10.0s //tensorflow/python/autograph/converters:lists_test PASSED in 10.0s //tensorflow/python/autograph/converters:logical_expressions_test PASSED in 10.1s //tensorflow/python/autograph/converters:return_statements_test PASSED in 12.3s //tensorflow/python/autograph/converters:slices_test PASSED in 10.4s //tensorflow/python/autograph/converters:variables_test PASSED in 10.3s //tensorflow/python/autograph/core:converter_test PASSED in 9.9s //tensorflow/python/autograph/core:function_wrappers_test PASSED in 9.5s //tensorflow/python/autograph/impl:api_test PASSED in 18.1s //tensorflow/python/autograph/impl:conversion_test PASSED in 32.4s //tensorflow/python/autograph/lang:special_functions_test PASSED in 11.2s //tensorflow/python/autograph/operators:conditional_expressions_test PASSED in 10.4s //tensorflow/python/autograph/operators:control_flow_test PASSED in 18.5s //tensorflow/python/autograph/operators:data_structures_test PASSED in 9.9s //tensorflow/python/autograph/operators:exceptions_test PASSED in 9.1s //tensorflow/python/autograph/operators:logical_test PASSED in 10.0s //tensorflow/python/autograph/operators:py_builtins_test PASSED in 19.6s //tensorflow/python/autograph/operators:slices_test PASSED in 10.0s //tensorflow/python/autograph/operators:variables_test PASSED in 10.0s //tensorflow/python/autograph/pyct:anno_test PASSED in 8.9s //tensorflow/python/autograph/pyct:ast_util_test PASSED in 9.3s //tensorflow/python/autograph/pyct:cache_test PASSED in 9.8s //tensorflow/python/autograph/pyct:cfg_test 
PASSED in 10.8s //tensorflow/python/autograph/pyct:error_utils_test PASSED in 10.1s //tensorflow/python/autograph/pyct:inspect_utils_test PASSED in 10.9s //tensorflow/python/autograph/pyct:loader_test PASSED in 10.8s //tensorflow/python/autograph/pyct:naming_test PASSED in 9.1s //tensorflow/python/autograph/pyct:origin_info_test PASSED in 9.7s //tensorflow/python/autograph/pyct:parser_test PASSED in 10.4s //tensorflow/python/autograph/pyct:pretty_printer_test PASSED in 9.7s //tensorflow/python/autograph/pyct:qual_names_test PASSED in 9.2s //tensorflow/python/autograph/pyct:templates_test PASSED in 9.5s //tensorflow/python/autograph/pyct:transformer_test PASSED in 8.9s //tensorflow/python/autograph/pyct:transpiler_test PASSED in 10.6s //tensorflow/python/autograph/pyct/static_analysis:activity_test PASSED in 9.8s //tensorflow/python/autograph/pyct/static_analysis:liveness_test PASSED in 9.3s //tensorflow/python/autograph/pyct/static_analysis:reaching_definitions_test PASSED in 9.7s //tensorflow/python/autograph/pyct/static_analysis:reaching_fndefs_test PASSED in 9.4s //tensorflow/python/autograph/pyct/static_analysis:type_inference_test PASSED in 10.7s //tensorflow/python/autograph/tests:assertion_test PASSED in 114.6s //tensorflow/python/autograph/tests:basic_ifexp_test PASSED in 120.7s //tensorflow/python/autograph/tests:call_to_builtin_function_test PASSED in 107.4s //tensorflow/python/autograph/tests:call_to_lambda_function_test PASSED in 95.0s //tensorflow/python/autograph/tests:call_to_named_tuple_test PASSED in 118.8s //tensorflow/python/autograph/tests:call_to_numpy_function_test PASSED in 93.5s //tensorflow/python/autograph/tests:call_to_print_function_test PASSED in 108.9s //tensorflow/python/autograph/tests:call_to_tf_api_test PASSED in 105.4s //tensorflow/python/autograph/tests:call_to_user_function_test PASSED in 112.1s //tensorflow/python/autograph/tests:composite_names_in_control_flow_test PASSED in 113.4s //tensorflow/python/autograph/tests:cond_basic_test PASSED in 117.1s //tensorflow/python/autograph/tests:datasets_test PASSED in 122.6s //tensorflow/python/autograph/tests:early_return_test PASSED in 117.9s //tensorflow/python/autograph/tests:ext_slice_test PASSED in 112.4s //tensorflow/python/autograph/tests:generator_test PASSED in 113.7s //tensorflow/python/autograph/tests:logical_expression_test PASSED in 94.6s //tensorflow/python/autograph/tests:loop_basic_test PASSED in 221.7s //tensorflow/python/autograph/tests:loop_control_flow_illegal_cases_test PASSED in 96.4s //tensorflow/python/autograph/tests:loop_created_variables_test PASSED in 135.4s //tensorflow/python/autograph/tests:loop_scoping_test PASSED in 152.5s //tensorflow/python/autograph/tests:loop_with_function_call_test PASSED in 137.8s //tensorflow/python/autograph/tests:loop_with_variable_type_illegal_cases_test PASSED in 100.6s //tensorflow/python/autograph/tests:loop_with_variable_type_test PASSED in 123.8s //tensorflow/python/autograph/tests:nested_control_flow_test PASSED in 134.8s //tensorflow/python/autograph/tests:type_annotations_test PASSED in 77.8s //tensorflow/python/autograph/utils:context_managers_test PASSED in 9.3s //tensorflow/python/autograph/utils:misc_test PASSED in 9.9s //tensorflow/python/autograph/utils:tensor_list_test PASSED in 10.3s //tensorflow/python/autograph/utils:tensors_test PASSED in 10.4s //tensorflow/python/checkpoint:checkpoint_management_test_cpu PASSED in 49.0s //tensorflow/python/checkpoint:checkpoint_metrics_test PASSED in 40.1s 
//tensorflow/python/checkpoint:checkpoint_test PASSED in 71.3s //tensorflow/python/checkpoint:checkpoint_view_test PASSED in 24.9s //tensorflow/python/checkpoint:checkpoint_with_v1_optimizers_test PASSED in 54.4s //tensorflow/python/checkpoint:functional_saver_test_cpu PASSED in 37.8s //tensorflow/python/checkpoint:restore_test PASSED in 33.1s //tensorflow/python/checkpoint:save_util_v1_test PASSED in 32.3s //tensorflow/python/checkpoint:saveable_compat_test PASSED in 35.6s //tensorflow/python/checkpoint:tensor_callable_test PASSED in 49.2s //tensorflow/python/checkpoint:trackable_view_test PASSED in 33.9s //tensorflow/python/checkpoint/sharding:sharding_policies_test PASSED in 50.1s //tensorflow/python/checkpoint/sharding:sharding_util_test PASSED in 36.0s //tensorflow/python/client:device_lib_test_cpu PASSED in 25.2s //tensorflow/python/client:events_writer_test PASSED in 30.4s //tensorflow/python/client:session_list_devices_test PASSED in 28.1s //tensorflow/python/client:session_partial_run_test PASSED in 42.3s //tensorflow/python/client:timeline_test_cpu PASSED in 23.2s //tensorflow/python/client:virtual_gpu_test_cpu PASSED in 26.0s //tensorflow/python/compat:compat_test PASSED in 34.7s //tensorflow/python/compat:disable_v2_behavior_test PASSED in 28.5s //tensorflow/python/compiler/mlir:mlir_test PASSED in 8.4s //tensorflow/python/compiler/tensorrt/test:batch_matmul_test_cpu PASSED in 31.1s //tensorflow/python/compiler/tensorrt/test:biasadd_matmul_test_cpu PASSED in 34.7s //tensorflow/python/compiler/tensorrt/test:bool_test_cpu PASSED in 58.3s //tensorflow/python/compiler/tensorrt/test:cast_test_cpu PASSED in 12.9s //tensorflow/python/compiler/tensorrt/test:concatenation_test_cpu PASSED in 12.7s //tensorflow/python/compiler/tensorrt/test:const_broadcast_test_cpu PASSED in 12.3s //tensorflow/python/compiler/tensorrt/test:data_dependent_shape_test_cpu PASSED in 13.7s //tensorflow/python/compiler/tensorrt/test:dynamic_input_shapes_test_cpu PASSED in 13.3s //tensorflow/python/compiler/tensorrt/test:identity_output_test_cpu PASSED in 11.5s //tensorflow/python/compiler/tensorrt/test:int32_test_cpu PASSED in 13.3s //tensorflow/python/compiler/tensorrt/test:lru_cache_test_cpu PASSED in 12.7s //tensorflow/python/compiler/tensorrt/test:multi_connection_neighbor_engine_test_cpu PASSED in 13.0s //tensorflow/python/compiler/tensorrt/test:neighboring_engine_test_cpu PASSED in 17.6s //tensorflow/python/compiler/tensorrt/test:quantization_test_cpu PASSED in 16.3s //tensorflow/python/compiler/tensorrt/test:rank_two_test_cpu PASSED in 15.8s //tensorflow/python/compiler/tensorrt/test:reshape_transpose_test_cpu PASSED in 35.1s //tensorflow/python/compiler/tensorrt/test:topk_test_cpu PASSED in 77.1s //tensorflow/python/compiler/tensorrt/test:trt_engine_op_shape_test_cpu PASSED in 41.8s //tensorflow/python/compiler/tensorrt/test:trt_mode_test_cpu PASSED in 39.4s //tensorflow/python/compiler/tensorrt/test:unary_test_cpu PASSED in 25.3s //tensorflow/python/compiler/tensorrt/test:vgg_block_nchw_test_cpu PASSED in 23.9s //tensorflow/python/compiler/tensorrt/test:vgg_block_test_cpu PASSED in 20.9s //tensorflow/python/compiler/xla:jit_compile_test_cpu PASSED in 28.5s //tensorflow/python/compiler/xla:jit_test_cpu PASSED in 48.3s //tensorflow/python/compiler/xla:xla_test_cpu PASSED in 59.0s //tensorflow/python/compiler/xla/experimental:xla_sharding_test PASSED in 10.3s //tensorflow/python/data/experimental/kernel_tests:assert_cardinality_test PASSED in 33.6s 
//tensorflow/python/data/experimental/kernel_tests:assert_next_test PASSED in 12.8s //tensorflow/python/data/experimental/kernel_tests:assert_prev_test PASSED in 11.6s //tensorflow/python/data/experimental/kernel_tests:compression_ops_test PASSED in 17.6s //tensorflow/python/data/experimental/kernel_tests:copy_to_device_test_cpu PASSED in 17.7s //tensorflow/python/data/experimental/kernel_tests:dense_to_sparse_batch_test PASSED in 24.5s //tensorflow/python/data/experimental/kernel_tests:io_test PASSED in 55.7s //tensorflow/python/data/experimental/kernel_tests:iterator_ops_test PASSED in 13.1s //tensorflow/python/data/experimental/kernel_tests:lookup_ops_test PASSED in 56.5s //tensorflow/python/data/experimental/kernel_tests:make_csv_dataset_test PASSED in 31.4s //tensorflow/python/data/experimental/kernel_tests:make_saveable_from_iterator_test PASSED in 38.1s //tensorflow/python/data/experimental/kernel_tests:make_tf_record_dataset_test PASSED in 58.7s //tensorflow/python/data/experimental/kernel_tests:map_defun_op_test PASSED in 10.9s //tensorflow/python/data/experimental/kernel_tests:matching_files_dataset_test PASSED in 18.6s //tensorflow/python/data/experimental/kernel_tests:model_dataset_test PASSED in 12.2s //tensorflow/python/data/experimental/kernel_tests:non_serializable_test PASSED in 12.6s //tensorflow/python/data/experimental/kernel_tests:pad_to_cardinality_test PASSED in 14.0s //tensorflow/python/data/experimental/kernel_tests:prefetch_to_device_test_cpu PASSED in 15.5s //tensorflow/python/data/experimental/kernel_tests:prefetch_with_slack_test PASSED in 14.2s //tensorflow/python/data/experimental/kernel_tests:shuffle_and_repeat_test PASSED in 26.6s //tensorflow/python/data/experimental/kernel_tests:sleep_test PASSED in 10.9s //tensorflow/python/data/experimental/kernel_tests:tf_record_writer_test PASSED in 13.9s //tensorflow/python/data/experimental/kernel_tests:variant_test PASSED in 12.7s //tensorflow/python/data/experimental/kernel_tests:weighted_flat_map_test PASSED in 193.6s //tensorflow/python/data/experimental/kernel_tests:wrap_unwrap_test_cpu PASSED in 11.4s //tensorflow/python/data/experimental/kernel_tests/optimization:filter_fusion_test PASSED in 38.6s //tensorflow/python/data/experimental/kernel_tests/optimization:filter_parallelization_test PASSED in 201.6s //tensorflow/python/data/experimental/kernel_tests/optimization:grappler_test_cpu PASSED in 12.4s //tensorflow/python/data/experimental/kernel_tests/optimization:make_deterministic_test PASSED in 31.8s //tensorflow/python/data/experimental/kernel_tests/optimization:map_and_batch_fusion_test PASSED in 12.3s //tensorflow/python/data/experimental/kernel_tests/optimization:map_and_filter_fusion_test PASSED in 23.6s //tensorflow/python/data/experimental/kernel_tests/optimization:map_fusion_test PASSED in 157.4s //tensorflow/python/data/experimental/kernel_tests/optimization:map_parallelization_test PASSED in 15.8s //tensorflow/python/data/experimental/kernel_tests/optimization:noop_elimination_test PASSED in 17.6s //tensorflow/python/data/experimental/kernel_tests/optimization:seq_interleave_prefetch_test PASSED in 18.9s //tensorflow/python/data/experimental/kernel_tests/service:multi_device_test PASSED in 19.4s //tensorflow/python/data/experimental/service:server_lib_test PASSED in 33.8s //tensorflow/python/data/kernel_tests:as_numpy_iterator_test PASSED in 12.1s //tensorflow/python/data/kernel_tests:bucket_by_sequence_length_test PASSED in 22.5s //tensorflow/python/data/kernel_tests:cache_test PASSED in 395.4s 
//tensorflow/python/data/kernel_tests:cardinality_test PASSED in 16.3s //tensorflow/python/data/kernel_tests:checkpoint_test PASSED in 19.5s //tensorflow/python/data/kernel_tests:concatenate_test PASSED in 139.0s //tensorflow/python/data/kernel_tests:counter_test PASSED in 188.4s //tensorflow/python/data/kernel_tests:dataset_spec_test PASSED in 11.2s //tensorflow/python/data/kernel_tests:dataset_test PASSED in 30.1s //tensorflow/python/data/kernel_tests:enumerate_test PASSED in 205.7s //tensorflow/python/data/kernel_tests:fingerprint_test PASSED in 51.4s //tensorflow/python/data/kernel_tests:from_sparse_tensor_slices_test PASSED in 44.4s //tensorflow/python/data/kernel_tests:get_single_element_test PASSED in 14.1s //tensorflow/python/data/kernel_tests:ignore_errors_test PASSED in 81.2s //tensorflow/python/data/kernel_tests:io_test PASSED in 177.8s //tensorflow/python/data/kernel_tests:iterator_test_cpu PASSED in 25.2s //tensorflow/python/data/kernel_tests:len_test PASSED in 10.9s //tensorflow/python/data/kernel_tests:optional_test_cpu PASSED in 14.8s //tensorflow/python/data/kernel_tests:options_test PASSED in 14.1s //tensorflow/python/data/kernel_tests:placement_test_cpu PASSED in 13.5s //tensorflow/python/data/kernel_tests:prefetch_test PASSED in 109.5s //tensorflow/python/data/kernel_tests:random_test PASSED in 68.9s //tensorflow/python/data/kernel_tests:range_test PASSED in 103.7s //tensorflow/python/data/kernel_tests:rebatch_test PASSED in 82.1s //tensorflow/python/data/kernel_tests:reduce_test_cpu PASSED in 30.2s //tensorflow/python/data/kernel_tests:scan_test_cpu PASSED in 132.6s //tensorflow/python/data/kernel_tests:sparse_batch_test PASSED in 60.3s //tensorflow/python/data/kernel_tests:unbatch_test PASSED in 38.8s //tensorflow/python/data/util:convert_test PASSED in 12.8s //tensorflow/python/data/util:nest_test PASSED in 12.2s //tensorflow/python/data/util:options_test PASSED in 13.9s //tensorflow/python/data/util:random_seed_test PASSED in 19.0s //tensorflow/python/data/util:sparse_test PASSED in 11.1s //tensorflow/python/data/util:structure_test PASSED in 11.7s //tensorflow/python/data/util:traverse_test PASSED in 10.8s //tensorflow/python/debug/cli:analyzer_cli_test_cpu PASSED in 29.3s //tensorflow/python/debug/cli:cli_config_test PASSED in 10.3s //tensorflow/python/debug/cli:cli_shared_test PASSED in 17.5s //tensorflow/python/debug/cli:command_parser_test PASSED in 9.0s //tensorflow/python/debug/cli:debugger_cli_common_test PASSED in 9.3s //tensorflow/python/debug/cli:evaluator_test PASSED in 8.4s //tensorflow/python/debug/cli:profile_analyzer_cli_test PASSED in 8.0s //tensorflow/python/debug/cli:readline_ui_test PASSED in 9.0s //tensorflow/python/debug/cli:tensor_format_test PASSED in 9.3s //tensorflow/python/debug/lib:check_numerics_callback_test_cpu PASSED in 35.5s //tensorflow/python/debug/lib:common_test PASSED in 7.7s //tensorflow/python/debug/lib:debug_data_test PASSED in 7.9s //tensorflow/python/debug/lib:debug_events_monitors_test PASSED in 8.9s //tensorflow/python/debug/lib:debug_events_writer_test PASSED in 8.8s //tensorflow/python/debug/lib:debug_gradients_test_cpu PASSED in 16.9s //tensorflow/python/debug/lib:debug_graph_reconstruction_test_cpu PASSED in 34.2s //tensorflow/python/debug/lib:debug_graphs_test PASSED in 8.0s //tensorflow/python/debug/lib:debug_grappler_test_cpu PASSED in 18.8s //tensorflow/python/debug/lib:debug_utils_test PASSED in 7.9s //tensorflow/python/debug/lib:debug_v2_ops_test_cpu PASSED in 42.0s //tensorflow/python/debug/lib:profiling_test 
PASSED in 7.7s //tensorflow/python/debug/lib:session_debug_file_test_cpu PASSED in 311.2s //tensorflow/python/debug/lib:session_debug_multi_gpu_test_cpu PASSED in 30.3s //tensorflow/python/debug/lib:source_utils_test PASSED in 10.5s //tensorflow/python/debug/wrappers:disk_usage_test PASSED in 9.1s //tensorflow/python/debug/wrappers:dumping_wrapper_test PASSED in 8.8s //tensorflow/python/debug/wrappers:framework_test PASSED in 8.7s //tensorflow/python/debug/wrappers:local_cli_wrapper_test PASSED in 9.6s //tensorflow/python/distribute:checkpoint_utils_test_2gpu PASSED in 38.7s //tensorflow/python/distribute:checkpoint_utils_test_cpu PASSED in 36.7s //tensorflow/python/distribute:checkpointing_test_2gpu PASSED in 50.4s //tensorflow/python/distribute:checkpointing_test_cpu PASSED in 34.8s //tensorflow/python/distribute:collective_util_test PASSED in 15.2s //tensorflow/python/distribute:combinations_test_2gpu PASSED in 49.9s //tensorflow/python/distribute:combinations_test_cpu PASSED in 39.4s //tensorflow/python/distribute:cross_device_utils_test_cpu PASSED in 14.1s //tensorflow/python/distribute:custom_training_loop_gradient_test_2gpu PASSED in 15.7s //tensorflow/python/distribute:custom_training_loop_gradient_test_cpu PASSED in 14.8s //tensorflow/python/distribute:device_util_test_cpu PASSED in 14.1s //tensorflow/python/distribute:distribute_coordinator_test PASSED in 15.8s //tensorflow/python/distribute:distribute_lib_test PASSED in 17.9s //tensorflow/python/distribute:distribute_utils_test_2gpu PASSED in 12.7s //tensorflow/python/distribute:distribute_utils_test_cpu PASSED in 13.4s //tensorflow/python/distribute:input_ops_test_cpu PASSED in 14.0s //tensorflow/python/distribute:metrics_v1_test_2gpu PASSED in 29.4s //tensorflow/python/distribute:metrics_v1_test_cpu PASSED in 42.7s //tensorflow/python/distribute:mirrored_values_test_2gpu PASSED in 12.4s //tensorflow/python/distribute:mirrored_values_test_cpu PASSED in 12.8s //tensorflow/python/distribute:mirrored_variable_test_2gpu PASSED in 63.0s //tensorflow/python/distribute:mirrored_variable_test_cpu PASSED in 44.8s //tensorflow/python/distribute:multi_process_runner_no_init_test PASSED in 10.7s //tensorflow/python/distribute:multi_worker_continuous_run_test_cpu PASSED in 92.5s //tensorflow/python/distribute:multi_worker_util_test PASSED in 9.7s //tensorflow/python/distribute:mwms_pjrt_gpu_test_2gpu PASSED in 36.2s //tensorflow/python/distribute:mwms_pjrt_gpu_test_cpu PASSED in 44.1s //tensorflow/python/distribute:numpy_dataset_test PASSED in 24.7s //tensorflow/python/distribute:one_device_strategy_test_cpu PASSED in 27.2s //tensorflow/python/distribute:packed_distributed_variable_test PASSED in 13.1s //tensorflow/python/distribute:parameter_server_strategy_test_2gpu PASSED in 30.2s //tensorflow/python/distribute:parameter_server_strategy_test_cpu PASSED in 25.2s //tensorflow/python/distribute:parameter_server_strategy_v2_test_2gpu PASSED in 69.6s //tensorflow/python/distribute:parameter_server_strategy_v2_test_cpu PASSED in 67.6s //tensorflow/python/distribute:per_replica_test_2gpu PASSED in 61.3s //tensorflow/python/distribute:per_replica_test_cpu PASSED in 58.9s //tensorflow/python/distribute:ps_values_test_2gpu PASSED in 34.8s //tensorflow/python/distribute:ps_values_test_cpu PASSED in 38.8s //tensorflow/python/distribute:remote_mirrored_strategy_eager_test_cpu PASSED in 16.1s //tensorflow/python/distribute:sharded_variable_test PASSED in 86.7s //tensorflow/python/distribute:shared_variable_creator_test PASSED in 8.3s 
//tensorflow/python/distribute:strategy_combinations_test_cpu PASSED in 74.1s //tensorflow/python/distribute:template_mirrored_strategy_test_cpu PASSED in 44.8s //tensorflow/python/distribute:test_util_test_2gpu PASSED in 57.3s //tensorflow/python/distribute:test_util_test_cpu PASSED in 57.7s //tensorflow/python/distribute:tf_function_test_2gpu PASSED in 14.6s //tensorflow/python/distribute:tf_function_test_cpu PASSED in 55.0s //tensorflow/python/distribute:values_v2_test_cpu PASSED in 18.0s //tensorflow/python/distribute:warm_starting_util_test_2gpu PASSED in 15.5s //tensorflow/python/distribute:warm_starting_util_test_cpu PASSED in 15.7s //tensorflow/python/distribute/cluster_resolver:base_cluster_resolver_py_test PASSED in 10.9s //tensorflow/python/distribute/cluster_resolver:gce_cluster_resolver_py_test PASSED in 11.2s //tensorflow/python/distribute/cluster_resolver:kubernetes_cluster_resolver_py_test PASSED in 10.5s //tensorflow/python/distribute/cluster_resolver:sagemaker_cluster_resolver_py_test PASSED in 9.8s //tensorflow/python/distribute/cluster_resolver:slurm_cluster_resolver_py_test PASSED in 10.0s //tensorflow/python/distribute/cluster_resolver:tfconfig_cluster_resolver_py_test PASSED in 10.2s //tensorflow/python/distribute/cluster_resolver/tpu:tpu_cluster_resolver_py_test PASSED in 14.9s //tensorflow/python/distribute/coordinator:watchdog_test PASSED in 65.2s //tensorflow/python/distribute/experimental:dtensor_util_test_cpu PASSED in 13.3s //tensorflow/python/distribute/experimental:mirrored_strategy_test_cpu PASSED in 33.5s //tensorflow/python/distribute/experimental:multi_worker_mirrored_strategy_test_cpu PASSED in 18.7s //tensorflow/python/distribute/integration_test:saved_model_test_cpu PASSED in 55.5s //tensorflow/python/distribute/parallel_device:parallel_device_test_cpu PASSED in 17.7s //tensorflow/python/distribute/v1:all_reduce_test PASSED in 51.8s //tensorflow/python/distribute/v1:cross_device_ops_test_cpu PASSED in 72.2s //tensorflow/python/dlpack:dlpack_test_cpu PASSED in 12.8s //tensorflow/python/eager:backprop_test_cpu PASSED in 150.5s //tensorflow/python/eager:cancellation_test_cpu PASSED in 12.0s //tensorflow/python/eager:context_test_cpu PASSED in 14.7s //tensorflow/python/eager:core_test_cpu PASSED in 26.8s //tensorflow/python/eager:gradient_input_output_exclusions_test PASSED in 48.9s //tensorflow/python/eager:graph_only_ops_test_cpu PASSED in 16.1s //tensorflow/python/eager:lift_to_graph_test PASSED in 17.5s //tensorflow/python/eager:monitoring_test_cpu PASSED in 21.6s //tensorflow/python/eager:ops_test_cpu PASSED in 30.5s //tensorflow/python/eager:profiler_client_test PASSED in 9.2s //tensorflow/python/eager:profiler_test_cpu PASSED in 24.9s //tensorflow/python/eager:pywrap_tfe_test PASSED in 41.5s //tensorflow/python/eager:record_test PASSED in 23.3s //tensorflow/python/eager:run_eager_op_as_function_test_cpu PASSED in 10.9s //tensorflow/python/eager:run_eager_op_as_function_xla_test_cpu PASSED in 9.5s //tensorflow/python/eager:small_constants_optimizer_test_cpu PASSED in 244.6s //tensorflow/python/eager:tensor_test_cpu PASSED in 25.0s //tensorflow/python/eager:wrap_function_device_test_cpu PASSED in 26.6s //tensorflow/python/eager:wrap_function_test PASSED in 25.1s //tensorflow/python/eager/memory_tests:remote_memory_test_cpu PASSED in 19.8s //tensorflow/python/eager/polymorphic_function:argument_naming_test_cpu PASSED in 30.0s //tensorflow/python/eager/polymorphic_function:atomic_function_test_cpu PASSED in 39.0s 
//tensorflow/python/eager/polymorphic_function:collection_test_cpu PASSED in 33.8s //tensorflow/python/eager/polymorphic_function:compiler_ir_test_cpu PASSED in 11.7s //tensorflow/python/eager/polymorphic_function:compiler_ir_test_cpu_mlir_bridge_test PASSED in 12.9s //tensorflow/python/eager/polymorphic_function:concrete_function_test_cpu PASSED in 32.7s //tensorflow/python/eager/polymorphic_function:function_spec_test PASSED in 22.4s //tensorflow/python/eager/polymorphic_function:polymorphic_function_xla_test_cpu PASSED in 11.0s //tensorflow/python/eager/polymorphic_function:tracing_compilation_test PASSED in 58.3s //tensorflow/python/feature_column:sequence_feature_column_integration_test PASSED in 13.5s //tensorflow/python/feature_column:serialization_test PASSED in 56.2s //tensorflow/python/framework:auto_control_deps_test PASSED in 57.1s //tensorflow/python/framework:c_api_util_test PASSED in 20.3s //tensorflow/python/framework:common_shapes_test PASSED in 34.9s //tensorflow/python/framework:composite_tensor_test PASSED in 10.0s //tensorflow/python/framework:config_test_2gpu PASSED in 19.7s //tensorflow/python/framework:config_test_cpu PASSED in 32.1s //tensorflow/python/framework:constant_op_test PASSED in 14.9s //tensorflow/python/framework:device_spec_test PASSED in 10.8s //tensorflow/python/framework:device_test PASSED in 11.7s //tensorflow/python/framework:dtypes_test PASSED in 29.5s //tensorflow/python/framework:error_interpolation_test PASSED in 11.9s //tensorflow/python/framework:errors_test PASSED in 11.6s //tensorflow/python/framework:extension_type_field_test PASSED in 11.1s //tensorflow/python/framework:extension_type_test PASSED in 22.6s //tensorflow/python/framework:file_system_test PASSED in 10.8s //tensorflow/python/framework:flexible_dtypes_test PASSED in 116.2s //tensorflow/python/framework:function_def_to_graph_test PASSED in 13.8s //tensorflow/python/framework:graph_util_test PASSED in 14.3s //tensorflow/python/framework:immutable_dict_test PASSED in 52.9s //tensorflow/python/framework:importer_test PASSED in 53.3s //tensorflow/python/framework:indexed_slices_test PASSED in 39.1s //tensorflow/python/framework:kernels_test PASSED in 23.5s //tensorflow/python/framework:meta_graph_test PASSED in 27.5s //tensorflow/python/framework:node_file_writer_test_cpu PASSED in 21.3s //tensorflow/python/framework:offset_counter_helper_test PASSED in 0.1s //tensorflow/python/framework:op_allowlist_namespace_test PASSED in 11.1s //tensorflow/python/framework:op_callbacks_test_cpu PASSED in 23.7s //tensorflow/python/framework:op_def_library_test PASSED in 21.5s //tensorflow/python/framework:op_def_util_test PASSED in 30.2s //tensorflow/python/framework:ops_enable_eager_test PASSED in 7.8s //tensorflow/python/framework:ops_test PASSED in 36.7s //tensorflow/python/framework:proto_test PASSED in 47.7s //tensorflow/python/framework:py_context_manager_test PASSED in 30.6s //tensorflow/python/framework:python_api_dispatcher_test PASSED in 27.3s //tensorflow/python/framework:python_api_info_test PASSED in 53.4s //tensorflow/python/framework:python_api_parameter_converter_test PASSED in 27.2s //tensorflow/python/framework:python_op_gen_annotation_test PASSED in 18.5s //tensorflow/python/framework:python_op_gen_annotator_test PASSED in 0.1s //tensorflow/python/framework:python_op_gen_test PASSED in 0.1s //tensorflow/python/framework:python_tensor_converter_test PASSED in 24.7s //tensorflow/python/framework:random_seed_test PASSED in 30.2s //tensorflow/python/framework:registry_test PASSED 
in 28.5s //tensorflow/python/framework:smart_cond_test PASSED in 28.8s //tensorflow/python/framework:sparse_tensor_test PASSED in 31.3s //tensorflow/python/framework:subscribe_test PASSED in 24.6s //tensorflow/python/framework:tensor_shape_test PASSED in 30.7s //tensorflow/python/framework:tensor_test PASSED in 36.0s //tensorflow/python/framework:tensor_util_test PASSED in 34.6s //tensorflow/python/framework:test_combinations_test PASSED in 8.0s //tensorflow/python/framework:test_util_test_cpu PASSED in 60.6s //tensorflow/python/framework:tf2_test PASSED in 10.6s //tensorflow/python/framework:traceable_stack_test PASSED in 30.1s //tensorflow/python/framework:type_spec_test PASSED in 27.8s //tensorflow/python/framework:versions_test PASSED in 57.3s //tensorflow/python/framework:weak_tensor_test PASSED in 57.1s //tensorflow/python/framework/experimental:unified_api_test_cpu PASSED in 43.5s //tensorflow/python/grappler:arithmetic_optimizer_test_cpu PASSED in 22.2s //tensorflow/python/grappler:auto_mixed_precision_test_cpu PASSED in 44.4s //tensorflow/python/grappler:constant_folding_test_cpu PASSED in 35.2s //tensorflow/python/grappler:cost_analyzer_test PASSED in 35.9s //tensorflow/python/grappler:datasets_test PASSED in 49.2s //tensorflow/python/grappler:item_test PASSED in 29.6s //tensorflow/python/grappler:memory_optimizer_test PASSED in 46.9s //tensorflow/python/grappler:model_analyzer_test PASSED in 30.8s //tensorflow/python/grappler:remapper_test_cpu PASSED in 58.5s //tensorflow/python/grappler:tf_optimizer_test PASSED in 68.3s //tensorflow/python/kernel_tests:benchmark_test_cpu PASSED in 35.0s //tensorflow/python/kernel_tests:check_ops_test_cpu PASSED in 53.2s //tensorflow/python/kernel_tests:collective_ops_multi_worker_test PASSED in 89.8s //tensorflow/python/kernel_tests:composite_tensor_ops_test PASSED in 43.7s //tensorflow/python/kernel_tests:critical_section_test_cpu PASSED in 50.5s //tensorflow/python/kernel_tests:garbage_collection_test PASSED in 40.8s //tensorflow/python/kernel_tests:gradient_correctness_test_cpu PASSED in 29.2s //tensorflow/python/kernel_tests:histogram_ops_test_cpu PASSED in 36.3s //tensorflow/python/kernel_tests:logging_ops_test_cpu PASSED in 51.9s //tensorflow/python/kernel_tests:numerics_test_cpu PASSED in 25.8s //tensorflow/python/kernel_tests:template_test PASSED in 39.4s //tensorflow/python/kernel_tests:trace_op_test_cpu PASSED in 20.0s //tensorflow/python/kernel_tests/array_ops:batch_gather_op_test_cpu PASSED in 27.0s //tensorflow/python/kernel_tests/array_ops:batch_scatter_ops_test PASSED in 28.9s //tensorflow/python/kernel_tests/array_ops:batchtospace_op_test_cpu PASSED in 28.9s //tensorflow/python/kernel_tests/array_ops:bcast_ops_test PASSED in 21.8s //tensorflow/python/kernel_tests/array_ops:bitcast_op_test_cpu PASSED in 24.4s //tensorflow/python/kernel_tests/array_ops:broadcast_to_ops_test_cpu PASSED in 54.3s //tensorflow/python/kernel_tests/array_ops:cast_op_test_cpu PASSED in 27.8s //tensorflow/python/kernel_tests/array_ops:constant_op_eager_test_cpu PASSED in 25.3s //tensorflow/python/kernel_tests/array_ops:constant_op_test_cpu PASSED in 23.2s //tensorflow/python/kernel_tests/array_ops:denormal_test_cpu PASSED in 25.2s //tensorflow/python/kernel_tests/array_ops:depthtospace_op_test_cpu PASSED in 36.9s //tensorflow/python/kernel_tests/array_ops:edit_distance_op_test PASSED in 26.6s //tensorflow/python/kernel_tests/array_ops:fingerprint_op_test PASSED in 23.4s //tensorflow/python/kernel_tests/array_ops:gather_nd_op_test_cpu PASSED in 23.4s 
//tensorflow/python/kernel_tests/array_ops:identity_n_op_py_test PASSED in 35.5s //tensorflow/python/kernel_tests/array_ops:identity_op_py_test PASSED in 23.4s //tensorflow/python/kernel_tests/array_ops:large_concat_op_test_cpu PASSED in 22.5s //tensorflow/python/kernel_tests/array_ops:manip_ops_test_cpu PASSED in 47.1s //tensorflow/python/kernel_tests/array_ops:one_hot_op_test_cpu PASSED in 22.5s //tensorflow/python/kernel_tests/array_ops:pad_op_test_cpu PASSED in 32.9s //tensorflow/python/kernel_tests/array_ops:reshape_op_test_cpu PASSED in 44.5s //tensorflow/python/kernel_tests/array_ops:reverse_sequence_op_test_cpu PASSED in 27.7s //tensorflow/python/kernel_tests/array_ops:scalar_test_cpu PASSED in 19.3s //tensorflow/python/kernel_tests/array_ops:shape_ops_test_cpu PASSED in 53.2s //tensorflow/python/kernel_tests/array_ops:slice_op_test_cpu PASSED in 32.9s //tensorflow/python/kernel_tests/array_ops:spacetobatch_op_test_cpu PASSED in 39.2s //tensorflow/python/kernel_tests/array_ops:spacetodepth_op_test_cpu PASSED in 35.2s //tensorflow/python/kernel_tests/array_ops:stack_op_test_cpu PASSED in 59.9s //tensorflow/python/kernel_tests/array_ops:unique_op_test_cpu PASSED in 32.6s //tensorflow/python/kernel_tests/array_ops:unstack_op_test_cpu PASSED in 34.9s //tensorflow/python/kernel_tests/array_ops:where_op_test_cpu PASSED in 42.5s //tensorflow/python/kernel_tests/control_flow:cond_v2_test_cpu PASSED in 184.3s //tensorflow/python/kernel_tests/control_flow:control_flow_util_test PASSED in 31.0s //tensorflow/python/kernel_tests/control_flow:control_flow_util_v2_test PASSED in 33.3s //tensorflow/python/kernel_tests/control_flow:py_func_test_cpu PASSED in 56.0s //tensorflow/python/kernel_tests/control_flow:scan_ops_test_cpu PASSED in 196.2s //tensorflow/python/kernel_tests/control_flow:while_v2_test_cpu PASSED in 189.3s //tensorflow/python/kernel_tests/custom_ops:ackermann_test PASSED in 34.7s //tensorflow/python/kernel_tests/custom_ops:duplicate_op_test PASSED in 10.2s //tensorflow/python/kernel_tests/custom_ops:invalid_op_test PASSED in 10.0s //tensorflow/python/kernel_tests/data_structures:conditional_accumulator_test PASSED in 13.0s //tensorflow/python/kernel_tests/data_structures:dynamic_partition_op_test_2gpu PASSED in 16.9s //tensorflow/python/kernel_tests/data_structures:dynamic_partition_op_test_cpu PASSED in 16.8s //tensorflow/python/kernel_tests/data_structures:dynamic_stitch_op_test_cpu PASSED in 11.1s //tensorflow/python/kernel_tests/data_structures:fifo_queue_test PASSED in 13.5s //tensorflow/python/kernel_tests/data_structures:list_ops_test_cpu PASSED in 28.9s //tensorflow/python/kernel_tests/data_structures:listdiff_op_test PASSED in 13.3s //tensorflow/python/kernel_tests/data_structures:lookup_ops_test PASSED in 29.8s //tensorflow/python/kernel_tests/data_structures:map_ops_test PASSED in 19.5s //tensorflow/python/kernel_tests/data_structures:padding_fifo_queue_test_cpu PASSED in 16.2s //tensorflow/python/kernel_tests/data_structures:priority_queue_test PASSED in 28.0s //tensorflow/python/kernel_tests/data_structures:stack_ops_test_cpu PASSED in 26.2s //tensorflow/python/kernel_tests/data_structures:stage_op_test_cpu PASSED in 35.5s //tensorflow/python/kernel_tests/distributions:bernoulli_test_cpu PASSED in 42.7s //tensorflow/python/kernel_tests/distributions:bijector_test_cpu PASSED in 63.7s //tensorflow/python/kernel_tests/distributions:categorical_test_cpu PASSED in 34.3s //tensorflow/python/kernel_tests/distributions:dirichlet_multinomial_test_cpu PASSED in 43.0s 
//tensorflow/python/kernel_tests/distributions:dirichlet_test_cpu PASSED in 56.8s //tensorflow/python/kernel_tests/distributions:exponential_test_cpu PASSED in 59.9s //tensorflow/python/kernel_tests/distributions:gamma_test_cpu PASSED in 118.3s //tensorflow/python/kernel_tests/distributions:identity_bijector_test_cpu PASSED in 40.0s //tensorflow/python/kernel_tests/distributions:kullback_leibler_test_cpu PASSED in 38.3s //tensorflow/python/kernel_tests/distributions:laplace_test_cpu PASSED in 78.6s //tensorflow/python/kernel_tests/distributions:multinomial_test_cpu PASSED in 63.9s //tensorflow/python/kernel_tests/distributions:normal_test_cpu PASSED in 68.5s //tensorflow/python/kernel_tests/distributions:special_math_test_cpu PASSED in 41.8s //tensorflow/python/kernel_tests/distributions:uniform_test_cpu PASSED in 52.0s //tensorflow/python/kernel_tests/image_ops:attention_ops_test PASSED in 35.8s //tensorflow/python/kernel_tests/image_ops:decode_bmp_op_test PASSED in 33.3s //tensorflow/python/kernel_tests/image_ops:decode_compressed_op_test PASSED in 32.0s //tensorflow/python/kernel_tests/image_ops:decode_image_op_test PASSED in 30.5s //tensorflow/python/kernel_tests/image_ops:decode_png_op_test PASSED in 27.8s //tensorflow/python/kernel_tests/image_ops:decode_raw_op_test PASSED in 26.7s //tensorflow/python/kernel_tests/image_ops:draw_bounding_box_op_test_cpu PASSED in 32.2s //tensorflow/python/kernel_tests/image_ops:extract_image_patches_op_test_cpu PASSED in 32.9s //tensorflow/python/kernel_tests/image_ops:extract_volume_patches_op_test_cpu PASSED in 44.1s //tensorflow/python/kernel_tests/io_ops:checkpoint_ops_test PASSED in 60.3s //tensorflow/python/kernel_tests/io_ops:decode_csv_op_test PASSED in 30.3s //tensorflow/python/kernel_tests/io_ops:io_ops_test PASSED in 26.4s //tensorflow/python/kernel_tests/io_ops:parse_single_example_op_test PASSED in 32.2s //tensorflow/python/kernel_tests/io_ops:parsing_ops_test PASSED in 53.2s //tensorflow/python/kernel_tests/io_ops:reader_ops_test PASSED in 55.7s //tensorflow/python/kernel_tests/io_ops:record_input_test PASSED in 106.4s //tensorflow/python/kernel_tests/io_ops:save_restore_ops_test PASSED in 33.8s //tensorflow/python/kernel_tests/linalg:determinant_op_test_cpu PASSED in 20.3s //tensorflow/python/kernel_tests/linalg:linear_operator_addition_test_cpu PASSED in 41.3s //tensorflow/python/kernel_tests/linalg:linear_operator_test_cpu PASSED in 30.0s //tensorflow/python/kernel_tests/linalg:lu_op_test_cpu PASSED in 19.0s //tensorflow/python/kernel_tests/linalg:matrix_inverse_op_test_cpu PASSED in 23.1s //tensorflow/python/kernel_tests/linalg:matrix_logarithm_op_test PASSED in 180.0s //tensorflow/python/kernel_tests/linalg:matrix_solve_ls_op_test_cpu PASSED in 180.7s //tensorflow/python/kernel_tests/linalg:matrix_solve_op_test_cpu PASSED in 83.9s //tensorflow/python/kernel_tests/linalg:matrix_square_root_op_test_cpu PASSED in 23.8s //tensorflow/python/kernel_tests/linalg:slicing_test_cpu PASSED in 23.2s //tensorflow/python/kernel_tests/linalg/sparse:conjugate_gradient_test_cpu PASSED in 36.1s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_test_cpu PASSED in 25.1s //tensorflow/python/kernel_tests/math_ops:aggregate_ops_test_cpu PASSED in 39.9s //tensorflow/python/kernel_tests/math_ops:argmax_op_test_cpu PASSED in 24.4s //tensorflow/python/kernel_tests/math_ops:banded_triangular_solve_op_test_cpu PASSED in 61.2s //tensorflow/python/kernel_tests/math_ops:basic_gpu_test_cpu PASSED in 31.3s 
//tensorflow/python/kernel_tests/math_ops:bincount_op_test_cpu PASSED in 30.9s //tensorflow/python/kernel_tests/math_ops:bucketize_op_test_cpu PASSED in 19.0s //tensorflow/python/kernel_tests/math_ops:clip_ops_test PASSED in 26.2s //tensorflow/python/kernel_tests/math_ops:confusion_matrix_test PASSED in 37.9s //tensorflow/python/kernel_tests/math_ops:cross_grad_test_cpu PASSED in 35.3s //tensorflow/python/kernel_tests/math_ops:cumulative_logsumexp_test_cpu PASSED in 32.5s //tensorflow/python/kernel_tests/math_ops:in_topk_op_test_cpu PASSED in 31.7s //tensorflow/python/kernel_tests/math_ops:segment_reduction_ops_d9m_test_cpu PASSED in 16.6s //tensorflow/python/kernel_tests/math_ops:sets_test PASSED in 101.3s //tensorflow/python/kernel_tests/math_ops:topk_op_test_cpu PASSED in 33.0s //tensorflow/python/kernel_tests/math_ops:zero_division_test_cpu PASSED in 29.0s //tensorflow/python/kernel_tests/nn_ops:betainc_op_test_cpu PASSED in 43.6s //tensorflow/python/kernel_tests/nn_ops:bias_op_test_cpu PASSED in 156.9s //tensorflow/python/kernel_tests/nn_ops:conv1d_test_cpu PASSED in 25.2s //tensorflow/python/kernel_tests/nn_ops:conv1d_transpose_test_cpu PASSED in 32.0s //tensorflow/python/kernel_tests/nn_ops:conv2d_transpose_test_cpu PASSED in 42.3s //tensorflow/python/kernel_tests/nn_ops:conv3d_backprop_filter_v2_grad_test_cpu PASSED in 58.0s //tensorflow/python/kernel_tests/nn_ops:conv3d_transpose_test_cpu PASSED in 32.7s //tensorflow/python/kernel_tests/nn_ops:ctc_decoder_ops_test PASSED in 24.3s //tensorflow/python/kernel_tests/nn_ops:ctc_loss_op_test_cpu PASSED in 182.6s //tensorflow/python/kernel_tests/nn_ops:cudnn_d9m_test_cpu PASSED in 10.2s //tensorflow/python/kernel_tests/nn_ops:cudnn_deterministic_ops_test_cpu PASSED in 9.7s //tensorflow/python/kernel_tests/nn_ops:losses_test PASSED in 107.5s //tensorflow/python/kernel_tests/nn_ops:lrn_op_test_cpu PASSED in 40.7s //tensorflow/python/kernel_tests/nn_ops:morphological_ops_test_cpu PASSED in 37.5s //tensorflow/python/kernel_tests/nn_ops:nth_element_op_test_cpu PASSED in 36.3s //tensorflow/python/kernel_tests/nn_ops:pool_test_cpu PASSED in 115.6s //tensorflow/python/kernel_tests/nn_ops:pooling_ops_3d_test_cpu PASSED in 65.4s //tensorflow/python/kernel_tests/nn_ops:relu_op_test_cpu PASSED in 30.6s //tensorflow/python/kernel_tests/nn_ops:softmax_op_test_cpu PASSED in 24.3s //tensorflow/python/kernel_tests/nn_ops:softplus_op_test_cpu PASSED in 35.4s //tensorflow/python/kernel_tests/nn_ops:softsign_op_test_cpu PASSED in 31.4s //tensorflow/python/kernel_tests/nn_ops:xent_op_d9m_test_cpu PASSED in 135.6s //tensorflow/python/kernel_tests/nn_ops:xent_op_test_cpu PASSED in 9.8s //tensorflow/python/kernel_tests/proto:decode_proto_op_test PASSED in 34.9s //tensorflow/python/kernel_tests/proto:descriptor_source_test PASSED in 21.9s //tensorflow/python/kernel_tests/proto:encode_proto_op_test PASSED in 22.7s //tensorflow/python/kernel_tests/quantization_ops:quantization_ops_test PASSED in 31.7s //tensorflow/python/kernel_tests/random:candidate_sampler_ops_test PASSED in 23.9s //tensorflow/python/kernel_tests/random:multinomial_op_test_cpu PASSED in 44.7s //tensorflow/python/kernel_tests/random:parameterized_truncated_normal_op_test_cpu PASSED in 45.8s //tensorflow/python/kernel_tests/random:random_crop_test_cpu PASSED in 41.8s //tensorflow/python/kernel_tests/random:random_grad_test_cpu PASSED in 26.4s //tensorflow/python/kernel_tests/random:random_ops_test_cpu PASSED in 49.8s //tensorflow/python/kernel_tests/random:random_poisson_test_cpu PASSED in 44.1s 
//tensorflow/python/kernel_tests/random:random_shuffle_queue_test PASSED in 34.3s //tensorflow/python/kernel_tests/random:stateful_random_ops_test_cpu PASSED in 64.0s //tensorflow/python/kernel_tests/signal:mel_ops_test_cpu PASSED in 59.4s //tensorflow/python/kernel_tests/signal:mfcc_ops_test_cpu PASSED in 28.4s //tensorflow/python/kernel_tests/signal:reconstruction_ops_test_cpu PASSED in 47.1s //tensorflow/python/kernel_tests/signal:shape_ops_test_cpu PASSED in 90.5s //tensorflow/python/kernel_tests/sparse_ops:sparse_add_op_test PASSED in 40.3s //tensorflow/python/kernel_tests/sparse_ops:sparse_concat_op_test PASSED in 41.2s //tensorflow/python/kernel_tests/sparse_ops:sparse_conditional_accumulator_test PASSED in 34.4s //tensorflow/python/kernel_tests/sparse_ops:sparse_cross_op_test PASSED in 49.3s //tensorflow/python/kernel_tests/sparse_ops:sparse_matmul_op_test_cpu PASSED in 136.0s //tensorflow/python/kernel_tests/sparse_ops:sparse_reorder_op_test PASSED in 23.7s //tensorflow/python/kernel_tests/sparse_ops:sparse_reshape_op_test PASSED in 41.6s //tensorflow/python/kernel_tests/sparse_ops:sparse_serialization_ops_test PASSED in 41.9s //tensorflow/python/kernel_tests/sparse_ops:sparse_slice_op_test PASSED in 42.1s //tensorflow/python/kernel_tests/sparse_ops:sparse_split_op_test_cpu PASSED in 42.5s //tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_grad_test_cpu PASSED in 65.8s //tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_op_d9m_test_cpu PASSED in 110.8s //tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_op_test_cpu PASSED in 118.5s //tensorflow/python/kernel_tests/sparse_ops:sparse_tensors_map_ops_test PASSED in 37.7s //tensorflow/python/kernel_tests/sparse_ops:sparse_to_dense_op_py_test_cpu PASSED in 25.5s //tensorflow/python/kernel_tests/sparse_ops:sparse_xent_op_d9m_test_cpu PASSED in 183.0s //tensorflow/python/kernel_tests/sparse_ops:sparse_xent_op_test_cpu PASSED in 40.9s //tensorflow/python/kernel_tests/sparse_ops:sparsemask_op_test PASSED in 35.1s //tensorflow/python/kernel_tests/strings_ops:as_string_op_test PASSED in 30.4s //tensorflow/python/kernel_tests/strings_ops:base64_ops_test PASSED in 55.2s //tensorflow/python/kernel_tests/strings_ops:reduce_join_op_test_cpu PASSED in 48.5s //tensorflow/python/kernel_tests/strings_ops:regex_full_match_op_test PASSED in 38.2s //tensorflow/python/kernel_tests/strings_ops:regex_replace_op_test PASSED in 32.9s //tensorflow/python/kernel_tests/strings_ops:string_bytes_split_op_test PASSED in 42.4s //tensorflow/python/kernel_tests/strings_ops:string_format_op_test PASSED in 29.5s //tensorflow/python/kernel_tests/strings_ops:string_join_op_test PASSED in 49.1s //tensorflow/python/kernel_tests/strings_ops:string_length_op_test PASSED in 30.2s //tensorflow/python/kernel_tests/strings_ops:string_lower_op_test PASSED in 26.7s //tensorflow/python/kernel_tests/strings_ops:string_split_op_test PASSED in 46.8s //tensorflow/python/kernel_tests/strings_ops:string_strip_op_test PASSED in 30.0s //tensorflow/python/kernel_tests/strings_ops:string_to_hash_bucket_op_test_cpu PASSED in 20.9s //tensorflow/python/kernel_tests/strings_ops:string_to_number_op_test_cpu PASSED in 25.2s //tensorflow/python/kernel_tests/strings_ops:string_upper_op_test PASSED in 29.9s //tensorflow/python/kernel_tests/strings_ops:substr_op_test PASSED in 37.6s //tensorflow/python/kernel_tests/strings_ops:unicode_decode_op_test PASSED in 52.1s //tensorflow/python/kernel_tests/strings_ops:unicode_encode_op_test PASSED in 
21.3s //tensorflow/python/kernel_tests/strings_ops:unicode_script_op_test PASSED in 29.0s //tensorflow/python/kernel_tests/strings_ops:unicode_transcode_op_test PASSED in 31.9s //tensorflow/python/kernel_tests/strings_ops:unsorted_segment_join_op_test_cpu PASSED in 32.9s //tensorflow/python/kernel_tests/summary_ops:summary_ops_test_cpu PASSED in 51.0s //tensorflow/python/kernel_tests/summary_ops:summary_v1_audio_op_test_cpu PASSED in 22.1s //tensorflow/python/kernel_tests/summary_ops:summary_v1_image_op_test_cpu PASSED in 30.9s //tensorflow/python/kernel_tests/summary_ops:summary_v1_ops_test PASSED in 30.3s //tensorflow/python/kernel_tests/summary_ops:summary_v1_tensor_op_test PASSED in 28.2s //tensorflow/python/kernel_tests/v1_compat_tests:array_ops_test_cpu PASSED in 24.4s //tensorflow/python/kernel_tests/v1_compat_tests:dense_update_ops_test_cpu PASSED in 21.4s //tensorflow/python/kernel_tests/v1_compat_tests:identity_op_py_test PASSED in 39.0s //tensorflow/python/kernel_tests/v1_compat_tests:scatter_nd_ops_test_cpu PASSED in 29.5s //tensorflow/python/kernel_tests/v1_compat_tests:session_ops_test_cpu PASSED in 23.9s //tensorflow/python/kernel_tests/v1_compat_tests:stack_op_test_cpu PASSED in 26.9s //tensorflow/python/kernel_tests/variables:dense_update_ops_no_tsan_test_cpu PASSED in 21.0s //tensorflow/python/kernel_tests/variables:dense_update_ops_test_cpu PASSED in 23.5s //tensorflow/python/kernel_tests/variables:partitioned_variables_test PASSED in 24.8s //tensorflow/python/kernel_tests/variables:resource_variable_ops_test_cpu PASSED in 97.0s //tensorflow/python/kernel_tests/variables:variable_ops_test_cpu PASSED in 25.8s //tensorflow/python/kernel_tests/variables:variable_scope_test PASSED in 59.7s //tensorflow/python/kernel_tests/variables:variables_test PASSED in 44.0s //tensorflow/python/lib/io:file_io_test PASSED in 12.8s //tensorflow/python/lib/io:tf_record_test PASSED in 12.3s //tensorflow/python/module:module_test PASSED in 13.6s //tensorflow/python/ops:array_grad_test_cpu PASSED in 14.2s //tensorflow/python/ops:array_ops_shape_test PASSED in 9.3s //tensorflow/python/ops:array_ops_test PASSED in 9.2s //tensorflow/python/ops:autograph_ops_test PASSED in 9.7s //tensorflow/python/ops:bincount_ops_test_cpu PASSED in 14.0s //tensorflow/python/ops:bitwise_ops_test_cpu PASSED in 12.7s //tensorflow/python/ops:clip_ops_test PASSED in 13.7s //tensorflow/python/ops:clustering_ops_test PASSED in 25.2s //tensorflow/python/ops:collective_ops_gpu_test_cpu PASSED in 13.4s //tensorflow/python/ops:collective_ops_test PASSED in 21.8s //tensorflow/python/ops:collective_ops_xla_test PASSED in 10.8s //tensorflow/python/ops:compiled_collective_ops_gpu_test_2gpu PASSED in 13.1s //tensorflow/python/ops:compiled_collective_ops_gpu_test_cpu PASSED in 12.6s //tensorflow/python/ops:control_flow_v2_enable_test PASSED in 10.0s //tensorflow/python/ops:control_flow_v2_toggles_test PASSED in 10.9s //tensorflow/python/ops:dequantize_op_test PASSED in 11.0s //tensorflow/python/ops:embedding_ops_test_cpu PASSED in 12.0s //tensorflow/python/ops:factory_ops_test_cpu PASSED in 13.8s //tensorflow/python/ops:functional_ops_test PASSED in 10.2s //tensorflow/python/ops:gradient_checker_v2_test_cpu PASSED in 33.8s //tensorflow/python/ops:gradients_test_cpu PASSED in 21.8s //tensorflow/python/ops:init_ops_test_cpu PASSED in 14.9s //tensorflow/python/ops:init_ops_v2_test_cpu PASSED in 61.4s //tensorflow/python/ops:lookup_ops_async_checkpoint_test PASSED in 33.8s //tensorflow/python/ops:math_grad_test_cpu PASSED in 31.2s 
//tensorflow/python/ops:math_ops_linspace_test_cpu PASSED in 22.4s //tensorflow/python/ops:math_ops_test_cpu PASSED in 64.5s //tensorflow/python/ops:nn_grad_test_cpu PASSED in 36.5s //tensorflow/python/ops:nn_loss_scaling_utilities_test PASSED in 13.8s //tensorflow/python/ops:nn_test_cpu PASSED in 93.4s //tensorflow/python/ops:nn_xent_test_cpu PASSED in 25.0s //tensorflow/python/ops:op_selector_test PASSED in 10.2s //tensorflow/python/ops:quantized_conv_ops_test PASSED in 25.5s //tensorflow/python/ops:quantized_ops_test PASSED in 33.9s //tensorflow/python/ops:raw_ops_test_cpu PASSED in 36.0s //tensorflow/python/ops:rnn_grad_test_cpu PASSED in 19.9s //tensorflow/python/ops:script_ops_test PASSED in 9.7s //tensorflow/python/ops:sort_ops_test PASSED in 37.4s //tensorflow/python/ops:sparse_bincount_ops_test_cpu PASSED in 38.9s //tensorflow/python/ops:sparse_ops_test PASSED in 43.5s //tensorflow/python/ops:tensor_array_ops_test PASSED in 9.7s //tensorflow/python/ops:variable_spec_test PASSED in 21.3s //tensorflow/python/ops:weak_tensor_array_ops_test PASSED in 10.3s //tensorflow/python/ops:weak_tensor_constant_op_test PASSED in 61.7s //tensorflow/python/ops:weak_tensor_image_ops_test PASSED in 10.3s //tensorflow/python/ops:weak_tensor_math_ops_test PASSED in 24.5s //tensorflow/python/ops:weak_tensor_nn_test_cpu PASSED in 42.9s //tensorflow/python/ops:weak_tensor_np_array_ops_test PASSED in 45.5s //tensorflow/python/ops:weak_tensor_np_math_ops_test PASSED in 10.7s //tensorflow/python/ops:weak_tensor_ops_test PASSED in 103.5s //tensorflow/python/ops/losses:util_test PASSED in 10.4s //tensorflow/python/ops/memory_tests:custom_gradient_memory_test_cpu PASSED in 25.6s //tensorflow/python/ops/numpy_ops:np_array_ops_test_cpu PASSED in 156.9s //tensorflow/python/ops/numpy_ops:np_arrays_test_cpu PASSED in 36.8s //tensorflow/python/ops/numpy_ops:np_dtypes_test_cpu PASSED in 26.0s //tensorflow/python/ops/numpy_ops:np_interop_test_cpu PASSED in 122.2s //tensorflow/python/ops/numpy_ops:np_logic_test_cpu PASSED in 32.8s //tensorflow/python/ops/numpy_ops:np_math_ops_test_cpu PASSED in 51.4s //tensorflow/python/ops/numpy_ops:np_random_test_cpu PASSED in 138.4s //tensorflow/python/ops/numpy_ops:np_utils_test_cpu PASSED in 15.2s //tensorflow/python/ops/numpy_ops/integration_test:np_config_test_cpu PASSED in 28.5s //tensorflow/python/ops/numpy_ops/integration_test:public_symbol_test PASSED in 67.4s //tensorflow/python/ops/parallel_for:array_test_cpu PASSED in 46.0s //tensorflow/python/ops/parallel_for:gradients_test_cpu PASSED in 49.3s //tensorflow/python/ops/parallel_for:pfor_test PASSED in 10.0s //tensorflow/python/ops/parallel_for:xla_control_flow_ops_test_cpu PASSED in 58.1s //tensorflow/python/ops/ragged:convert_to_tensor_or_ragged_tensor_op_test PASSED in 9.5s //tensorflow/python/ops/ragged:ragged_batch_gather_op_test PASSED in 62.1s //tensorflow/python/ops/ragged:ragged_bincount_ops_test_cpu PASSED in 47.6s //tensorflow/python/ops/ragged:ragged_bitcast_op_test PASSED in 17.4s //tensorflow/python/ops/ragged:ragged_boolean_mask_op_test PASSED in 20.8s //tensorflow/python/ops/ragged:ragged_concat_op_test PASSED in 15.8s //tensorflow/python/ops/ragged:ragged_const_op_test PASSED in 9.4s //tensorflow/python/ops/ragged:ragged_constant_value_op_test PASSED in 9.2s //tensorflow/python/ops/ragged:ragged_cross_op_test PASSED in 28.8s //tensorflow/python/ops/ragged:ragged_dispatch_test PASSED in 145.5s //tensorflow/python/ops/ragged:ragged_dynamic_partition_op_test_cpu PASSED in 45.9s 
//tensorflow/python/ops/ragged:ragged_eager_test PASSED in 13.5s //tensorflow/python/ops/ragged:ragged_expand_dims_op_test PASSED in 9.5s //tensorflow/python/ops/ragged:ragged_factory_ops_test_cpu PASSED in 57.7s //tensorflow/python/ops/ragged:ragged_fill_empty_rows_op_test PASSED in 11.5s //tensorflow/python/ops/ragged:ragged_from_sparse_op_test PASSED in 10.6s //tensorflow/python/ops/ragged:ragged_from_tensor_op_test PASSED in 24.4s //tensorflow/python/ops/ragged:ragged_gather_nd_op_test PASSED in 14.0s //tensorflow/python/ops/ragged:ragged_map_flat_values_op_test PASSED in 12.5s //tensorflow/python/ops/ragged:ragged_map_fn_op_test PASSED in 17.1s //tensorflow/python/ops/ragged:ragged_math_ops_test PASSED in 17.3s //tensorflow/python/ops/ragged:ragged_matmul_op_test PASSED in 41.7s //tensorflow/python/ops/ragged:ragged_merge_dims_op_test PASSED in 34.8s //tensorflow/python/ops/ragged:ragged_one_hot_op_test PASSED in 26.2s //tensorflow/python/ops/ragged:ragged_operators_test PASSED in 20.0s //tensorflow/python/ops/ragged:ragged_placeholder_op_test PASSED in 8.4s //tensorflow/python/ops/ragged:ragged_print_op_test PASSED in 14.1s //tensorflow/python/ops/ragged:ragged_range_op_test PASSED in 8.2s //tensorflow/python/ops/ragged:ragged_rank_op_test PASSED in 8.0s //tensorflow/python/ops/ragged:ragged_reduce_op_test PASSED in 29.7s //tensorflow/python/ops/ragged:ragged_resize_image_op_test PASSED in 17.9s //tensorflow/python/ops/ragged:ragged_reverse_op_test PASSED in 8.6s //tensorflow/python/ops/ragged:ragged_row_lengths_op_test PASSED in 8.8s //tensorflow/python/ops/ragged:ragged_row_splits_to_segment_ids_op_test PASSED in 8.9s //tensorflow/python/ops/ragged:ragged_segment_ids_to_row_splits_op_test PASSED in 9.6s //tensorflow/python/ops/ragged:ragged_segment_op_test PASSED in 14.6s //tensorflow/python/ops/ragged:ragged_size_op_test PASSED in 8.5s //tensorflow/python/ops/ragged:ragged_split_op_test PASSED in 55.8s //tensorflow/python/ops/ragged:ragged_squeeze_op_test PASSED in 16.8s //tensorflow/python/ops/ragged:ragged_stack_op_test PASSED in 12.8s //tensorflow/python/ops/ragged:ragged_tensor_bounding_shape_op_test PASSED in 10.2s //tensorflow/python/ops/ragged:ragged_tensor_shape_test PASSED in 71.2s //tensorflow/python/ops/ragged:ragged_tile_op_test PASSED in 55.3s //tensorflow/python/ops/ragged:ragged_to_sparse_op_test PASSED in 8.6s //tensorflow/python/ops/ragged:ragged_to_tensor_op_test PASSED in 87.1s //tensorflow/python/ops/ragged:ragged_util_test PASSED in 37.4s //tensorflow/python/ops/ragged:ragged_where_op_test PASSED in 45.5s //tensorflow/python/ops/ragged:row_partition_test PASSED in 42.4s //tensorflow/python/ops/ragged:string_ngrams_op_test PASSED in 10.2s //tensorflow/python/ops/ragged:strings_reduce_join_op_test PASSED in 11.3s //tensorflow/python/ops/structured:structured_array_ops_test PASSED in 53.9s //tensorflow/python/ops/structured:structured_tensor_slice_test PASSED in 66.2s //tensorflow/python/ops/structured:structured_tensor_spec_test PASSED in 12.1s //tensorflow/python/ops/structured:structured_tensor_test PASSED in 53.3s //tensorflow/python/ops/v1_compat_tests:gradient_checker_test_cpu PASSED in 31.7s //tensorflow/python/platform:benchmark_test PASSED in 34.6s //tensorflow/python/platform:build_info_test PASSED in 27.0s //tensorflow/python/platform:resource_loader_test PASSED in 13.2s //tensorflow/python/profiler:pprof_profiler_test PASSED in 10.0s //tensorflow/python/profiler:profile_context_test_cpu PASSED in 97.6s 
//tensorflow/python/profiler:profiler_client_test_cpu PASSED in 32.0s //tensorflow/python/profiler:profiler_test_cpu PASSED in 52.8s //tensorflow/python/profiler:profiler_v2_test_cpu PASSED in 49.3s //tensorflow/python/profiler:profiler_wrapper_test PASSED in 7.8s //tensorflow/python/profiler:tfprof_logger_test PASSED in 30.4s //tensorflow/python/profiler/internal:flops_registry_test PASSED in 9.3s //tensorflow/python/profiler/internal:print_model_analysis_test PASSED in 10.3s //tensorflow/python/profiler/internal:run_metadata_test_cpu PASSED in 53.2s //tensorflow/python/saved_model:fingerprinting_test PASSED in 40.5s //tensorflow/python/saved_model:load_v1_in_v2_test PASSED in 114.5s //tensorflow/python/saved_model:loader_test PASSED in 85.7s //tensorflow/python/saved_model:method_name_updater_test PASSED in 24.0s //tensorflow/python/saved_model:metrics_test PASSED in 55.4s //tensorflow/python/saved_model:nested_structure_coder_test PASSED in 24.8s //tensorflow/python/saved_model:pywrap_saved_model_fingerprinting_test PASSED in 23.2s //tensorflow/python/saved_model:pywrap_saved_model_metrics_test PASSED in 35.1s //tensorflow/python/saved_model:revived_types_test PASSED in 24.0s //tensorflow/python/saved_model:save_context_test PASSED in 37.2s //tensorflow/python/saved_model:save_test PASSED in 193.4s //tensorflow/python/saved_model:saved_model_test PASSED in 172.2s //tensorflow/python/saved_model:signature_def_utils_test PASSED in 30.5s //tensorflow/python/saved_model:simple_save_test PASSED in 33.4s //tensorflow/python/saved_model:tracing_utils_test PASSED in 27.5s //tensorflow/python/saved_model:utils_test PASSED in 45.7s //tensorflow/python/saved_model/model_utils:export_output_test PASSED in 11.4s //tensorflow/python/saved_model/model_utils:export_test PASSED in 14.7s //tensorflow/python/saved_model/model_utils:mode_keys_test PASSED in 9.5s //tensorflow/python/saved_model/registration:registration_saving_test PASSED in 140.1s //tensorflow/python/saved_model/registration:registration_test PASSED in 31.9s //tensorflow/python/saved_model/registration:tf_registration_test PASSED in 107.0s //tensorflow/python/saved_model/tests:variable_wrapper_test PASSED in 50.9s //tensorflow/python/summary:plugin_asset_test PASSED in 58.0s //tensorflow/python/summary:summary_iterator_test PASSED in 23.8s //tensorflow/python/summary:summary_test PASSED in 35.6s //tensorflow/python/summary:summary_v2_test PASSED in 49.0s //tensorflow/python/summary/writer:writer_test PASSED in 59.4s //tensorflow/python/tools:aot_compiled_test PASSED in 22.8s //tensorflow/python/tools:freeze_graph_test PASSED in 12.5s //tensorflow/python/tools:optimize_for_inference_test PASSED in 10.5s //tensorflow/python/tools:print_selective_registration_header_test PASSED in 9.3s //tensorflow/python/tools:saved_model_cli_test PASSED in 27.8s //tensorflow/python/tools:saved_model_utils_test PASSED in 11.0s //tensorflow/python/tools:strip_unused_test PASSED in 10.0s //tensorflow/python/tools/api/generator:create_python_api_test PASSED in 10.3s //tensorflow/python/tools/api/generator:output_init_files_test PASSED in 20.2s //tensorflow/python/tools/api/generator:tensorflow_doc_srcs_test PASSED in 9.8s //tensorflow/python/tools/api/generator2/extractor:extractor_test PASSED in 2.3s //tensorflow/python/tools/api/generator2/generator:generator_test PASSED in 2.0s //tensorflow/python/tools/api/generator2/shared:exported_api_test PASSED in 9.8s //tensorflow/python/tpu:bfloat16_test PASSED in 19.0s //tensorflow/python/tpu:feature_column_test 
PASSED in 39.9s //tensorflow/python/tpu:topology_test PASSED in 19.7s //tensorflow/python/tpu:tpu_embedding_for_serving_test PASSED in 37.5s //tensorflow/python/tpu:tpu_embedding_v2_utils_test PASSED in 37.6s //tensorflow/python/tpu:tpu_embedding_v3_checkpoint_adapter_test PASSED in 25.9s //tensorflow/python/tpu:tpu_embedding_v3_utils_test PASSED in 28.6s //tensorflow/python/tpu:tpu_infeed_test PASSED in 12.0s //tensorflow/python/tpu:tpu_sharding_test PASSED in 30.2s //tensorflow/python/tpu:tpu_test_wrapper_test PASSED in 10.0s //tensorflow/python/tpu/client:client_py_test PASSED in 27.0s //tensorflow/python/trackable:autotrackable_test PASSED in 25.4s //tensorflow/python/trackable:base_delegate_test PASSED in 31.1s //tensorflow/python/trackable:base_test PASSED in 22.6s //tensorflow/python/trackable:python_state_test PASSED in 30.0s //tensorflow/python/trackable:resource_test PASSED in 27.6s //tensorflow/python/trackable:trackable_utils_test PASSED in 39.2s //tensorflow/python/training:adadelta_test_cpu PASSED in 47.7s //tensorflow/python/training:adagrad_da_test_cpu PASSED in 31.7s //tensorflow/python/training:adagrad_test_cpu PASSED in 46.2s //tensorflow/python/training:adam_test_cpu PASSED in 75.6s //tensorflow/python/training:basic_loops_test_cpu PASSED in 32.0s //tensorflow/python/training:basic_session_run_hooks_test PASSED in 30.9s //tensorflow/python/training:checkpoint_ops_test PASSED in 32.3s //tensorflow/python/training:coordinator_test_cpu PASSED in 35.7s //tensorflow/python/training:device_setter_test_cpu PASSED in 30.9s //tensorflow/python/training:ftrl_test_cpu PASSED in 79.3s //tensorflow/python/training:gradient_descent_test_cpu PASSED in 39.4s //tensorflow/python/training:input_test PASSED in 68.7s //tensorflow/python/training:momentum_test_cpu PASSED in 47.6s //tensorflow/python/training:monitored_session_test PASSED in 93.2s //tensorflow/python/training:moving_averages_test_cpu PASSED in 70.3s //tensorflow/python/training:optimizer_test_cpu PASSED in 43.7s //tensorflow/python/training:proximal_adagrad_test_cpu PASSED in 32.8s //tensorflow/python/training:proximal_gradient_descent_test_cpu PASSED in 31.0s //tensorflow/python/training:quantize_training_test_cpu PASSED in 16.3s //tensorflow/python/training:queue_runner_test_cpu PASSED in 26.9s //tensorflow/python/training:rmsprop_test_cpu PASSED in 97.0s //tensorflow/python/training:saver_large_partitioned_variable_test PASSED in 33.1s //tensorflow/python/training:saver_test_2gpu PASSED in 108.4s //tensorflow/python/training:saver_test_cpu PASSED in 102.1s //tensorflow/python/training:server_lib_multiple_containers_test PASSED in 27.1s //tensorflow/python/training:server_lib_same_variables_clear_container_test PASSED in 28.4s //tensorflow/python/training:server_lib_same_variables_clear_test PASSED in 31.7s //tensorflow/python/training:server_lib_same_variables_no_clear_test PASSED in 28.8s //tensorflow/python/training:server_lib_sparse_job_test PASSED in 29.3s //tensorflow/python/training:server_lib_test PASSED in 57.6s //tensorflow/python/training:session_manager_test_cpu PASSED in 110.7s //tensorflow/python/training:slot_creator_test_cpu PASSED in 39.8s //tensorflow/python/training:supervisor_test PASSED in 41.4s //tensorflow/python/training:training_ops_mlir_test_cpu PASSED in 31.4s //tensorflow/python/training:training_ops_test_cpu PASSED in 29.7s //tensorflow/python/training:training_util_test PASSED in 44.9s //tensorflow/python/training:warm_starting_util_test PASSED in 73.2s 
//tensorflow/python/training/experimental:loss_scale_optimizer_test PASSED in 21.1s //tensorflow/python/training/experimental:loss_scale_test PASSED in 40.2s //tensorflow/python/training/experimental:mixed_precision_test_cpu PASSED in 43.1s //tensorflow/python/training/saving:saveable_object_util_test PASSED in 25.5s //tensorflow/python/util:compat_test PASSED in 30.6s //tensorflow/python/util:decorator_utils_test PASSED in 32.2s //tensorflow/python/util:deprecation_test PASSED in 29.0s //tensorflow/python/util:dispatch_test PASSED in 34.2s //tensorflow/python/util:example_parser_configuration_test PASSED in 36.6s //tensorflow/python/util:fast_module_type_test PASSED in 33.6s //tensorflow/python/util:function_parameter_canonicalizer_test PASSED in 24.2s //tensorflow/python/util:function_utils_test PASSED in 38.7s //tensorflow/python/util:keyword_args_test PASSED in 23.6s //tensorflow/python/util:lazy_loader_test PASSED in 29.6s //tensorflow/python/util:lock_util_test PASSED in 30.7s //tensorflow/python/util:module_wrapper_test PASSED in 26.5s //tensorflow/python/util:nest_test PASSED in 29.2s //tensorflow/python/util:object_identity_test PASSED in 11.1s //tensorflow/python/util:pywrap_xla_ops_test PASSED in 4.0s //tensorflow/python/util:serialization_test PASSED in 11.3s //tensorflow/python/util:tf_contextlib_test PASSED in 12.4s //tensorflow/python/util:tf_decorator_test PASSED in 10.7s //tensorflow/python/util:tf_export_test PASSED in 11.2s //tensorflow/python/util:tf_inspect_test PASSED in 12.5s //tensorflow/python/util:tf_should_use_test PASSED in 12.0s //tensorflow/python/util:tf_stack_test PASSED in 12.1s //tensorflow/python/util:traceback_utils_test PASSED in 11.2s //tensorflow/python/util:type_annotations_test PASSED in 11.8s //tensorflow/python/util:variable_utils_test PASSED in 11.5s //tensorflow/python/util:vlog_test PASSED in 11.1s //tensorflow/python/util/protobuf:protobuf_compare_test PASSED in 5.7s //tensorflow/tools/api/tests:module_test PASSED in 27.3s //tensorflow/tools/benchmark:benchmark_model_test PASSED in 1.9s //tensorflow/tools/common:public_api_test PASSED in 2.6s //tensorflow/tools/common:traverse_test PASSED in 2.8s //tensorflow/tools/compatibility:all_renames_v2_test PASSED in 9.7s //tensorflow/tools/compatibility:ast_edits_test PASSED in 10.1s //tensorflow/tools/compatibility:test_file_v1_0 PASSED in 25.1s //tensorflow/tools/compatibility:test_file_v2_0 PASSED in 24.3s //tensorflow/tools/compatibility:tf_upgrade_test PASSED in 9.6s //tensorflow/tools/compatibility:tf_upgrade_v2_safety_test PASSED in 10.3s //tensorflow/tools/docs:tf_doctest_test PASSED in 5.0s //tensorflow/tools/graph_transforms:file_utils_test PASSED in 0.4s //tensorflow/tools/graph_transforms:transform_graph_test PASSED in 1.8s //tensorflow/tools/graph_transforms:transform_utils_test PASSED in 2.0s //tensorflow/tools/graph_transforms:transforms_test PASSED in 2.6s //tensorflow/tools/proto_splitter:merge_test PASSED in 0.2s //tensorflow/tools/proto_splitter:split_graph_def_test PASSED in 17.6s //tensorflow/tools/proto_splitter:split_test PASSED in 17.5s //tensorflow/tools/proto_splitter:util_test PASSED in 16.8s //tensorflow/tools/proto_splitter/cc:composable_splitter_test PASSED in 0.2s //tensorflow/tools/proto_splitter/cc:graph_def_splitter_test PASSED in 0.2s //tensorflow/tools/proto_splitter/cc:saved_model_splitter_test PASSED in 0.2s //tensorflow/tools/proto_splitter/cc:util_test PASSED in 2.5s //tensorflow/tools/proto_splitter/python:saved_model_test PASSED in 9.3s 
//tensorflow/tools/proto_splitter/python:test_util_test PASSED in 9.9s //tensorflow/tools/proto_text:gen_proto_text_functions_lib_test PASSED in 0.1s //tensorflow/tools/tensorflow_builder/compat_checker:compat_checker_test PASSED in 0.3s //tensorflow/compiler/tests:complex_div_test_cpu PASSED in 13.3s Stats over 2 runs: max = 13.3s, min = 6.6s, avg = 9.9s, dev = 3.3s //tensorflow/compiler/tests:complex_div_test_cpu_mlir_bridge_test PASSED in 11.7s Stats over 2 runs: max = 11.7s, min = 5.1s, avg = 8.4s, dev = 3.3s //tensorflow/python/data/experimental/kernel_tests/optimization:optimization_test PASSED in 23.5s Stats over 2 runs: max = 23.5s, min = 10.1s, avg = 16.8s, dev = 6.7s //tensorflow/python/data/experimental/kernel_tests/service:metadata_test PASSED in 22.4s Stats over 2 runs: max = 22.4s, min = 16.0s, avg = 19.2s, dev = 3.2s //tensorflow/python/data/kernel_tests:padded_batch_test PASSED in 85.0s Stats over 2 runs: max = 85.0s, min = 55.3s, avg = 70.2s, dev = 14.9s //tensorflow/python/data/kernel_tests:repeat_test PASSED in 304.6s Stats over 2 runs: max = 304.6s, min = 296.3s, avg = 300.5s, dev = 4.1s //tensorflow/python/data/kernel_tests:window_test PASSED in 42.2s Stats over 2 runs: max = 42.2s, min = 36.5s, avg = 39.3s, dev = 2.9s //tensorflow/python/kernel_tests/array_ops:scatter_nd_ops_test_cpu PASSED in 40.3s Stats over 2 runs: max = 40.3s, min = 17.1s, avg = 28.7s, dev = 11.6s //tensorflow/python/kernel_tests/control_flow:functional_ops_test_cpu PASSED in 76.7s Stats over 2 runs: max = 76.7s, min = 34.2s, avg = 55.5s, dev = 21.3s //tensorflow/python/kernel_tests/control_flow:map_fn_test_cpu PASSED in 28.4s Stats over 2 runs: max = 28.4s, min = 15.0s, avg = 21.7s, dev = 6.7s //tensorflow/python/kernel_tests/nn_ops:atrous_conv2d_test_cpu PASSED in 49.5s Stats over 2 runs: max = 49.5s, min = 43.0s, avg = 46.3s, dev = 3.3s //tensorflow/python/kernel_tests/nn_ops:bias_op_d9m_test_cpu PASSED in 126.8s Stats over 2 runs: max = 126.8s, min = 43.5s, avg = 85.2s, dev = 41.7s //tensorflow/python/kernel_tests/nn_ops:conv2d_backprop_filter_grad_test_cpu PASSED in 23.6s Stats over 2 runs: max = 23.6s, min = 12.1s, avg = 17.9s, dev = 5.8s //tensorflow/python/kernel_tests/signal:fft_ops_test_cpu PASSED in 228.7s Stats over 2 runs: max = 228.7s, min = 143.8s, avg = 186.3s, dev = 42.5s //tensorflow/python/ops:control_flow_ops_test_cpu PASSED in 30.5s Stats over 2 runs: max = 30.5s, min = 26.3s, avg = 28.4s, dev = 2.1s //tensorflow/compiler/tests:spacetobatch_op_test_cpu PASSED in 15.9s Stats over 3 runs: max = 15.9s, min = 7.5s, avg = 11.3s, dev = 3.5s //tensorflow/compiler/tests:spacetobatch_op_test_cpu_mlir_bridge_test PASSED in 21.9s Stats over 3 runs: max = 21.9s, min = 10.5s, avg = 14.8s, dev = 5.0s //tensorflow/core/data/service:thread_safe_buffer_test PASSED in 0.1s Stats over 3 runs: max = 0.1s, min = 0.1s, avg = 0.1s, dev = 0.0s //tensorflow/python/data/experimental/kernel_tests/service:multi_process_cluster_test PASSED in 27.5s Stats over 3 runs: max = 27.5s, min = 17.0s, avg = 21.0s, dev = 4.7s //tensorflow/python/data/kernel_tests:unique_test PASSED in 15.3s Stats over 3 runs: max = 15.3s, min = 6.8s, avg = 9.9s, dev = 3.9s //tensorflow/python/distribute/coordinator:metric_utils_test PASSED in 25.7s Stats over 3 runs: max = 25.7s, min = 11.7s, avg = 17.5s, dev = 6.0s //tensorflow/python/kernel_tests/array_ops:gather_op_test_cpu PASSED in 143.3s Stats over 3 runs: max = 143.3s, min = 42.4s, avg = 79.1s, dev = 45.5s //tensorflow/python/kernel_tests/array_ops:weights_broadcast_test 
PASSED in 29.6s Stats over 3 runs: max = 29.6s, min = 8.3s, avg = 19.8s, dev = 8.8s //tensorflow/python/kernel_tests/distributions:util_test_cpu PASSED in 60.5s Stats over 3 runs: max = 60.5s, min = 9.8s, avg = 27.6s, dev = 23.3s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_grad_test_cpu PASSED in 32.6s Stats over 3 runs: max = 32.6s, min = 5.3s, avg = 18.4s, dev = 11.1s //tensorflow/python/kernel_tests/random:multinomial_op_big_test_cpu PASSED in 45.7s Stats over 3 runs: max = 45.7s, min = 26.3s, avg = 33.5s, dev = 8.7s //tensorflow/core/kernels:example_parsing_ops_test PASSED in 0.4s Stats over 4 runs: max = 0.4s, min = 0.3s, avg = 0.4s, dev = 0.0s //tensorflow/dtensor/python/tests:batchparallel_spmd_test_cpu PASSED in 17.0s Stats over 4 runs: max = 17.0s, min = 10.6s, avg = 12.9s, dev = 2.6s //tensorflow/dtensor/python/tests:conv_test_cpu PASSED in 15.0s Stats over 4 runs: max = 15.0s, min = 7.6s, avg = 10.8s, dev = 3.2s //tensorflow/dtensor/python/tests:sparse_test_cpu PASSED in 15.8s Stats over 4 runs: max = 15.8s, min = 7.4s, avg = 10.8s, dev = 3.5s //tensorflow/python/data/experimental/kernel_tests:auto_shard_dataset_test PASSED in 33.2s Stats over 4 runs: max = 33.2s, min = 13.7s, avg = 26.2s, dev = 7.4s //tensorflow/python/data/experimental/kernel_tests:from_list_test PASSED in 41.0s Stats over 4 runs: max = 41.0s, min = 33.0s, avg = 37.3s, dev = 3.7s //tensorflow/python/data/experimental/kernel_tests:map_and_batch_test PASSED in 41.6s Stats over 4 runs: max = 41.6s, min = 26.9s, avg = 31.7s, dev = 5.9s //tensorflow/python/data/experimental/kernel_tests:parse_example_dataset_test PASSED in 26.8s Stats over 4 runs: max = 26.8s, min = 12.0s, avg = 20.5s, dev = 5.9s //tensorflow/python/data/experimental/kernel_tests:rebatch_dataset_test PASSED in 19.3s Stats over 4 runs: max = 19.3s, min = 7.3s, avg = 12.7s, dev = 4.3s //tensorflow/python/data/experimental/kernel_tests:sql_dataset_test PASSED in 126.6s Stats over 4 runs: max = 126.6s, min = 47.4s, avg = 86.4s, dev = 30.7s //tensorflow/python/data/experimental/kernel_tests/service:cross_trainer_cache_ft_test PASSED in 13.4s Stats over 4 runs: max = 13.4s, min = 4.3s, avg = 6.8s, dev = 3.8s //tensorflow/python/data/kernel_tests:fixed_length_record_dataset_test PASSED in 110.9s Stats over 4 runs: max = 110.9s, min = 15.2s, avg = 59.7s, dev = 38.0s //tensorflow/python/data/kernel_tests:from_generator_test PASSED in 31.9s Stats over 4 runs: max = 31.9s, min = 11.8s, avg = 20.1s, dev = 7.4s //tensorflow/python/data/kernel_tests:from_tensor_slices_test PASSED in 173.0s Stats over 4 runs: max = 173.0s, min = 60.7s, avg = 113.4s, dev = 47.0s //tensorflow/python/data/kernel_tests:from_tensors_test PASSED in 130.9s Stats over 4 runs: max = 130.9s, min = 62.5s, avg = 81.7s, dev = 28.4s //tensorflow/python/data/kernel_tests:group_by_window_test PASSED in 76.3s Stats over 4 runs: max = 76.3s, min = 20.1s, avg = 36.3s, dev = 23.2s //tensorflow/python/data/kernel_tests:list_files_test PASSED in 105.3s Stats over 4 runs: max = 105.3s, min = 70.7s, avg = 85.8s, dev = 12.5s //tensorflow/python/data/kernel_tests:ragged_batch_test PASSED in 23.3s Stats over 4 runs: max = 23.3s, min = 15.9s, avg = 18.9s, dev = 2.7s //tensorflow/python/data/kernel_tests:take_test PASSED in 253.1s Stats over 4 runs: max = 253.1s, min = 124.9s, avg = 162.7s, dev = 52.9s //tensorflow/python/data/kernel_tests:take_while_test PASSED in 176.6s Stats over 4 runs: max = 176.6s, min = 102.7s, avg = 126.6s, dev = 29.4s 
//tensorflow/python/data/kernel_tests:text_line_dataset_test PASSED in 108.6s Stats over 4 runs: max = 108.6s, min = 36.1s, avg = 64.9s, dev = 28.2s //tensorflow/python/data/kernel_tests:zip_test PASSED in 63.9s Stats over 4 runs: max = 63.9s, min = 53.7s, avg = 58.0s, dev = 3.9s //tensorflow/python/debug/lib:dumping_callback_test_cpu PASSED in 51.2s Stats over 4 runs: max = 51.2s, min = 22.5s, avg = 30.6s, dev = 11.9s //tensorflow/python/distribute:cross_device_ops_test_cpu PASSED in 32.9s Stats over 4 runs: max = 32.9s, min = 21.3s, avg = 27.1s, dev = 5.1s //tensorflow/python/framework:convert_to_constants_test PASSED in 24.4s Stats over 4 runs: max = 24.4s, min = 12.7s, avg = 18.3s, dev = 4.3s //tensorflow/python/kernel_tests:collective_ops_test_cpu PASSED in 76.3s Stats over 4 runs: max = 76.3s, min = 38.8s, avg = 52.9s, dev = 14.3s //tensorflow/python/kernel_tests/array_ops:concat_op_test_cpu PASSED in 39.4s Stats over 4 runs: max = 39.4s, min = 17.2s, avg = 28.4s, dev = 10.9s //tensorflow/python/kernel_tests/array_ops:init_ops_test_cpu PASSED in 169.2s Stats over 4 runs: max = 169.2s, min = 23.1s, avg = 90.1s, dev = 54.3s //tensorflow/python/kernel_tests/array_ops:split_op_test_cpu PASSED in 55.1s Stats over 4 runs: max = 55.1s, min = 9.6s, avg = 33.8s, dev = 19.2s //tensorflow/python/kernel_tests/linalg:einsum_op_test_cpu PASSED in 145.2s Stats over 4 runs: max = 145.2s, min = 14.9s, avg = 76.0s, dev = 50.6s //tensorflow/python/kernel_tests/linalg:linear_operator_lower_triangular_test_cpu PASSED in 140.3s Stats over 4 runs: max = 140.3s, min = 74.6s, avg = 104.8s, dev = 25.0s //tensorflow/python/kernel_tests/nn_ops:conv_ops_test_cpu PASSED in 62.8s Stats over 4 runs: max = 62.8s, min = 41.4s, avg = 49.6s, dev = 8.1s //tensorflow/python/kernel_tests/random:random_gamma_test_cpu PASSED in 239.3s Stats over 4 runs: max = 239.3s, min = 9.3s, avg = 121.5s, dev = 109.8s //tensorflow/python/kernel_tests/signal:window_ops_test_cpu PASSED in 77.7s Stats over 4 runs: max = 77.7s, min = 41.1s, avg = 58.5s, dev = 13.9s //tensorflow/python/ops:nn_batchnorm_test_cpu PASSED in 41.5s Stats over 4 runs: max = 41.5s, min = 11.6s, avg = 20.0s, dev = 12.5s //tensorflow/python/ops:nn_fused_batchnorm_d9m_test_cpu PASSED in 42.1s Stats over 4 runs: max = 42.1s, min = 13.6s, avg = 21.7s, dev = 11.8s //tensorflow/python/ops/ragged:ragged_gather_op_test PASSED in 82.3s Stats over 4 runs: max = 82.3s, min = 26.0s, avg = 48.4s, dev = 20.8s //tensorflow/python/ops/ragged:ragged_getitem_test PASSED in 56.5s Stats over 4 runs: max = 56.5s, min = 47.6s, avg = 51.6s, dev = 3.7s //tensorflow/python/kernel_tests/linalg:matrix_triangular_solve_op_test_cpu FLAKY, failed in 1 out of 4 in 900.6s Stats over 4 runs: max = 900.6s, min = 13.2s, avg = 394.1s, dev = 381.0s /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/kernel_tests/linalg/matrix_triangular_solve_op_test_cpu/shard_1_of_3/test_attempts/attempt_1.log //tensorflow/compiler/tests:conv3d_test_cpu PASSED in 20.4s Stats over 5 runs: max = 20.4s, min = 5.7s, avg = 11.8s, dev = 6.5s //tensorflow/compiler/tests:conv3d_test_cpu_mlir_bridge_test PASSED in 16.5s Stats over 5 runs: max = 16.5s, min = 6.8s, avg = 10.5s, dev = 3.9s //tensorflow/compiler/tests:depthwise_conv_op_test_cpu PASSED in 22.5s Stats over 5 runs: max = 22.5s, min = 6.3s, avg = 11.5s, dev = 5.7s //tensorflow/compiler/tests:depthwise_conv_op_test_cpu_mlir_bridge_test PASSED in 20.6s Stats over 5 
runs: max = 20.6s, min = 7.1s, avg = 11.4s, dev = 4.9s //tensorflow/compiler/tests:fused_batchnorm_test_cpu PASSED in 12.1s Stats over 5 runs: max = 12.1s, min = 5.8s, avg = 7.5s, dev = 2.4s //tensorflow/compiler/tests:fused_batchnorm_test_cpu_mlir_bridge_test PASSED in 12.7s Stats over 5 runs: max = 12.7s, min = 5.7s, avg = 7.4s, dev = 2.7s //tensorflow/compiler/tests:reduce_ops_test_cpu PASSED in 19.2s Stats over 5 runs: max = 19.2s, min = 8.8s, avg = 12.9s, dev = 3.4s //tensorflow/compiler/tests:reduce_ops_test_cpu_mlir_bridge_test PASSED in 22.3s Stats over 5 runs: max = 22.3s, min = 12.0s, avg = 15.0s, dev = 3.8s //tensorflow/compiler/tests:special_math_test_cpu PASSED in 211.5s Stats over 5 runs: max = 211.5s, min = 37.1s, avg = 82.1s, dev = 65.5s //tensorflow/compiler/tests:special_math_test_cpu_mlir_bridge_test PASSED in 140.3s Stats over 5 runs: max = 140.3s, min = 32.7s, avg = 67.0s, dev = 37.8s //tensorflow/core/grappler/optimizers:constant_folding_test PASSED in 2.6s Stats over 5 runs: max = 2.6s, min = 2.0s, avg = 2.2s, dev = 0.3s //tensorflow/dtensor/python/tests:layout_propagation_test_cpu PASSED in 20.4s Stats over 5 runs: max = 20.4s, min = 6.2s, avg = 9.9s, dev = 5.4s //tensorflow/dtensor/python/tests:multi_mesh_test_cpu PASSED in 10.1s Stats over 5 runs: max = 10.1s, min = 5.8s, avg = 7.7s, dev = 1.9s //tensorflow/python/distribute:mirrored_strategy_test_2gpu PASSED in 14.7s Stats over 5 runs: max = 14.7s, min = 6.4s, avg = 9.7s, dev = 3.8s //tensorflow/python/distribute:mirrored_strategy_test_cpu PASSED in 17.8s Stats over 5 runs: max = 17.8s, min = 7.9s, avg = 13.3s, dev = 3.2s //tensorflow/python/eager:device_placement_test_cpu PASSED in 12.9s Stats over 5 runs: max = 12.9s, min = 6.0s, avg = 7.8s, dev = 2.6s //tensorflow/python/eager:forwardprop_test_cpu PASSED in 160.0s Stats over 5 runs: max = 160.0s, min = 13.3s, avg = 62.7s, dev = 51.7s //tensorflow/python/eager/polymorphic_function:gradients_test_cpu PASSED in 31.5s Stats over 5 runs: max = 31.5s, min = 10.6s, avg = 16.1s, dev = 7.8s //tensorflow/python/grappler:cluster_test_cpu PASSED in 27.1s Stats over 5 runs: max = 27.1s, min = 4.7s, avg = 9.6s, dev = 8.8s //tensorflow/python/kernel_tests/linalg:cholesky_op_test_cpu PASSED in 90.8s Stats over 5 runs: max = 90.8s, min = 38.3s, avg = 61.6s, dev = 16.9s //tensorflow/python/kernel_tests/linalg:linear_operator_adjoint_test_cpu PASSED in 101.9s Stats over 5 runs: max = 101.9s, min = 64.3s, avg = 88.0s, dev = 16.0s //tensorflow/python/kernel_tests/linalg:linear_operator_composition_test_cpu PASSED in 261.6s Stats over 5 runs: max = 261.6s, min = 231.5s, avg = 253.4s, dev = 11.5s //tensorflow/python/kernel_tests/linalg:linear_operator_diag_test_cpu PASSED in 135.8s Stats over 5 runs: max = 135.8s, min = 58.1s, avg = 95.9s, dev = 31.0s //tensorflow/python/kernel_tests/linalg:linear_operator_full_matrix_test_cpu PASSED in 143.4s Stats over 5 runs: max = 143.4s, min = 103.6s, avg = 124.9s, dev = 13.6s //tensorflow/python/kernel_tests/linalg:linear_operator_householder_test_cpu PASSED in 108.5s Stats over 5 runs: max = 108.5s, min = 69.0s, avg = 85.7s, dev = 14.8s //tensorflow/python/kernel_tests/linalg:linear_operator_identity_test_cpu PASSED in 207.4s Stats over 5 runs: max = 207.4s, min = 104.6s, avg = 151.3s, dev = 41.4s //tensorflow/python/kernel_tests/linalg:linear_operator_inversion_test_cpu PASSED in 117.3s Stats over 5 runs: max = 117.3s, min = 76.9s, avg = 93.8s, dev = 14.3s //tensorflow/python/kernel_tests/linalg:linear_operator_permutation_test_cpu PASSED in 
91.8s Stats over 5 runs: max = 91.8s, min = 47.9s, avg = 72.7s, dev = 15.8s //tensorflow/python/kernel_tests/linalg:linear_operator_toeplitz_test_cpu PASSED in 135.4s Stats over 5 runs: max = 135.4s, min = 69.6s, avg = 101.1s, dev = 25.8s //tensorflow/python/kernel_tests/linalg:linear_operator_util_test_cpu PASSED in 29.4s Stats over 5 runs: max = 29.4s, min = 10.2s, avg = 16.7s, dev = 7.0s //tensorflow/python/kernel_tests/linalg:linear_operator_zeros_test_cpu PASSED in 104.8s Stats over 5 runs: max = 104.8s, min = 45.2s, avg = 80.5s, dev = 20.7s //tensorflow/python/kernel_tests/linalg:tridiagonal_matmul_op_test_cpu PASSED in 288.8s Stats over 5 runs: max = 288.8s, min = 7.8s, avg = 66.0s, dev = 111.4s //tensorflow/python/kernel_tests/nn_ops:fractional_avg_pool_op_test PASSED in 39.1s Stats over 5 runs: max = 39.1s, min = 6.8s, avg = 19.3s, dev = 13.2s //tensorflow/python/kernel_tests/nn_ops:fractional_max_pool_op_test PASSED in 39.8s Stats over 5 runs: max = 39.8s, min = 13.1s, avg = 25.9s, dev = 11.2s //tensorflow/python/kernel_tests/sparse_ops:sparse_ops_test_cpu PASSED in 85.5s Stats over 5 runs: max = 85.5s, min = 11.3s, avg = 34.0s, dev = 27.8s //tensorflow/python/ops/parallel_for:math_test_cpu PASSED in 68.0s Stats over 5 runs: max = 68.0s, min = 31.3s, avg = 50.1s, dev = 12.0s //tensorflow/compiler/tests:scan_ops_test_cpu PASSED in 23.9s Stats over 6 runs: max = 23.9s, min = 10.7s, avg = 15.6s, dev = 4.3s //tensorflow/compiler/tests:scan_ops_test_cpu_mlir_bridge_test PASSED in 22.1s Stats over 6 runs: max = 22.1s, min = 13.5s, avg = 18.9s, dev = 3.0s //tensorflow/python/data/experimental/kernel_tests:make_batched_features_dataset_test PASSED in 26.5s Stats over 6 runs: max = 26.5s, min = 4.5s, avg = 13.4s, dev = 8.4s //tensorflow/python/kernel_tests/array_ops:diag_op_test_cpu PASSED in 126.3s Stats over 6 runs: max = 126.3s, min = 17.4s, avg = 39.7s, dev = 39.2s //tensorflow/python/kernel_tests/math_ops:reduction_ops_test_cpu PASSED in 129.7s Stats over 6 runs: max = 129.7s, min = 62.0s, avg = 101.3s, dev = 22.7s //tensorflow/python/distribute/experimental/rpc:rpc_ops_test PASSED in 15.5s Stats over 7 runs: max = 15.5s, min = 6.2s, avg = 8.7s, dev = 3.2s //tensorflow/compiler/tests:ftrl_test_cpu PASSED in 16.3s Stats over 8 runs: max = 16.3s, min = 6.3s, avg = 8.6s, dev = 3.0s //tensorflow/compiler/tests:matrix_diag_ops_test_cpu PASSED in 104.4s Stats over 8 runs: max = 104.4s, min = 3.8s, avg = 38.4s, dev = 35.6s //tensorflow/compiler/tests:matrix_diag_ops_test_cpu_mlir_bridge_test PASSED in 138.5s Stats over 8 runs: max = 138.5s, min = 3.7s, avg = 45.5s, dev = 46.0s //tensorflow/compiler/tests:ternary_ops_test_cpu PASSED in 43.4s Stats over 8 runs: max = 43.4s, min = 9.6s, avg = 22.6s, dev = 13.2s //tensorflow/compiler/tests:ternary_ops_test_cpu_mlir_bridge_test PASSED in 36.6s Stats over 8 runs: max = 36.6s, min = 8.7s, avg = 22.5s, dev = 10.2s //tensorflow/dtensor/python/tests:input_util_test PASSED in 26.7s Stats over 8 runs: max = 26.7s, min = 16.9s, avg = 22.1s, dev = 3.3s //tensorflow/dtensor/python/tests:save_restore_v2_test_cpu PASSED in 16.1s Stats over 8 runs: max = 16.1s, min = 8.2s, avg = 11.0s, dev = 3.0s //tensorflow/python/data/experimental/kernel_tests:csv_dataset_test PASSED in 36.4s Stats over 8 runs: max = 36.4s, min = 5.8s, avg = 16.0s, dev = 10.9s //tensorflow/python/data/experimental/kernel_tests:global_shuffle_test PASSED in 28.4s Stats over 8 runs: max = 28.4s, min = 17.4s, avg = 19.9s, dev = 3.3s 
//tensorflow/python/data/experimental/kernel_tests:index_flat_map_test PASSED in 91.0s Stats over 8 runs: max = 91.0s, min = 59.1s, avg = 72.9s, dev = 12.9s //tensorflow/python/data/experimental/kernel_tests:parallel_interleave_test PASSED in 30.1s Stats over 8 runs: max = 30.1s, min = 8.5s, avg = 21.2s, dev = 7.6s //tensorflow/python/data/experimental/kernel_tests/service:coordinated_read_ft_test PASSED in 36.9s Stats over 8 runs: max = 36.9s, min = 5.3s, avg = 20.5s, dev = 12.4s //tensorflow/python/data/experimental/kernel_tests/service:coordinated_read_test PASSED in 27.7s Stats over 8 runs: max = 27.7s, min = 5.1s, avg = 12.3s, dev = 8.6s //tensorflow/python/data/experimental/kernel_tests/service:cross_trainer_cache_test PASSED in 28.2s Stats over 8 runs: max = 28.2s, min = 4.2s, avg = 11.9s, dev = 8.3s //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_load_ft_test PASSED in 26.3s Stats over 8 runs: max = 26.3s, min = 14.5s, avg = 18.7s, dev = 3.5s //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_load_test PASSED in 375.1s Stats over 8 runs: max = 375.1s, min = 52.7s, avg = 123.6s, dev = 110.6s //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_test PASSED in 38.9s Stats over 8 runs: max = 38.9s, min = 12.2s, avg = 22.8s, dev = 9.7s //tensorflow/python/data/experimental/kernel_tests/service:fault_tolerance_test PASSED in 16.3s Stats over 8 runs: max = 16.3s, min = 5.3s, avg = 9.3s, dev = 3.9s //tensorflow/python/data/kernel_tests:batch_test PASSED in 189.8s Stats over 8 runs: max = 189.8s, min = 76.6s, avg = 129.6s, dev = 39.6s //tensorflow/python/data/kernel_tests:filter_test PASSED in 178.7s Stats over 8 runs: max = 178.7s, min = 39.0s, avg = 80.5s, dev = 48.2s //tensorflow/python/data/kernel_tests:flat_map_test PASSED in 191.1s Stats over 8 runs: max = 191.1s, min = 30.9s, avg = 63.3s, dev = 49.1s //tensorflow/python/data/kernel_tests:shard_test PASSED in 136.8s Stats over 8 runs: max = 136.8s, min = 92.6s, avg = 107.8s, dev = 14.6s //tensorflow/python/data/kernel_tests:shuffle_test PASSED in 120.9s Stats over 8 runs: max = 120.9s, min = 82.5s, avg = 99.0s, dev = 12.1s //tensorflow/python/data/kernel_tests:skip_test PASSED in 106.6s Stats over 8 runs: max = 106.6s, min = 74.9s, avg = 91.1s, dev = 10.7s //tensorflow/python/data/kernel_tests:tf_record_dataset_test PASSED in 26.4s Stats over 8 runs: max = 26.4s, min = 11.7s, avg = 18.4s, dev = 4.2s //tensorflow/python/distribute/failure_handling:failure_handler_test PASSED in 85.7s Stats over 8 runs: max = 85.7s, min = 22.6s, avg = 49.6s, dev = 21.0s //tensorflow/python/distribute/failure_handling:gce_failure_handler_test PASSED in 91.3s Stats over 8 runs: max = 91.3s, min = 10.7s, avg = 37.3s, dev = 29.7s //tensorflow/python/kernel_tests/linalg:linalg_ops_test_cpu PASSED in 99.0s Stats over 8 runs: max = 99.0s, min = 41.9s, avg = 78.5s, dev = 17.6s //tensorflow/python/kernel_tests/linalg:linear_operator_block_diag_test_cpu PASSED in 459.2s Stats over 8 runs: max = 459.2s, min = 222.0s, avg = 332.8s, dev = 77.2s //tensorflow/python/kernel_tests/linalg:linear_operator_block_lower_triangular_test_cpu PASSED in 188.1s Stats over 8 runs: max = 188.1s, min = 128.2s, avg = 154.8s, dev = 20.1s //tensorflow/python/kernel_tests/nn_ops:depthwise_conv_op_d9m_test_cpu PASSED in 57.7s Stats over 8 runs: max = 57.7s, min = 3.6s, avg = 14.2s, dev = 17.8s //tensorflow/python/kernel_tests/nn_ops:depthwise_conv_op_test_cpu PASSED in 10.1s Stats over 8 runs: max = 10.1s, min = 
3.6s, avg = 5.0s, dev = 2.0s //tensorflow/python/ops/ragged:dynamic_ragged_shape_test PASSED in 55.3s Stats over 8 runs: max = 55.3s, min = 30.6s, avg = 41.2s, dev = 8.2s //tensorflow/python/ops/ragged:ragged_tensor_test PASSED in 34.5s Stats over 8 runs: max = 34.5s, min = 9.8s, avg = 16.0s, dev = 7.9s //tensorflow/compiler/tests:conv2d_test_cpu PASSED in 13.5s Stats over 10 runs: max = 13.5s, min = 5.2s, avg = 7.2s, dev = 2.6s //tensorflow/compiler/tests:conv2d_test_cpu_mlir_bridge_test PASSED in 14.0s Stats over 10 runs: max = 14.0s, min = 5.1s, avg = 6.7s, dev = 2.5s //tensorflow/compiler/tests:random_ops_test_cpu PASSED in 11.1s Stats over 10 runs: max = 11.1s, min = 4.8s, avg = 8.2s, dev = 1.9s //tensorflow/compiler/tests:random_ops_test_cpu_mlir_bridge_test PASSED in 13.7s Stats over 10 runs: max = 13.7s, min = 4.1s, avg = 9.6s, dev = 2.6s //tensorflow/compiler/tests:stateful_random_ops_test_cpu PASSED in 19.9s Stats over 10 runs: max = 19.9s, min = 13.3s, avg = 15.8s, dev = 2.0s //tensorflow/compiler/tests:stateful_random_ops_test_cpu_mlir_bridge_test PASSED in 22.3s Stats over 10 runs: max = 22.3s, min = 13.6s, avg = 16.3s, dev = 3.0s //tensorflow/compiler/tests:stateless_random_ops_test_cpu PASSED in 81.6s Stats over 10 runs: max = 81.6s, min = 40.1s, avg = 59.1s, dev = 14.4s //tensorflow/compiler/tests:stateless_random_ops_test_cpu_mlir_bridge_test PASSED in 84.1s Stats over 10 runs: max = 84.1s, min = 43.8s, avg = 62.3s, dev = 12.8s //tensorflow/python/data/kernel_tests:rejection_resample_test PASSED in 19.2s Stats over 10 runs: max = 19.2s, min = 4.5s, avg = 10.2s, dev = 4.8s //tensorflow/python/distribute:input_lib_type_spec_test_2gpu PASSED in 18.1s Stats over 10 runs: max = 18.1s, min = 4.5s, avg = 12.0s, dev = 4.5s //tensorflow/python/distribute:input_lib_type_spec_test_cpu PASSED in 17.0s Stats over 10 runs: max = 17.0s, min = 5.5s, avg = 11.5s, dev = 4.2s //tensorflow/python/framework:function_test_cpu PASSED in 47.1s Stats over 10 runs: max = 47.1s, min = 5.2s, avg = 12.0s, dev = 12.3s //tensorflow/python/kernel_tests/array_ops:array_ops_test_cpu PASSED in 38.9s Stats over 10 runs: max = 38.9s, min = 10.6s, avg = 18.6s, dev = 7.3s //tensorflow/python/kernel_tests/array_ops:inplace_ops_test_cpu PASSED in 19.1s Stats over 10 runs: max = 19.1s, min = 5.4s, avg = 10.3s, dev = 4.3s //tensorflow/python/kernel_tests/data_structures:tensor_array_ops_test_cpu PASSED in 27.3s Stats over 10 runs: max = 27.3s, min = 4.8s, avg = 8.7s, dev = 6.3s //tensorflow/python/kernel_tests/linalg:linear_operator_tridiag_test_cpu PASSED in 286.4s Stats over 10 runs: max = 286.4s, min = 150.2s, avg = 199.2s, dev = 45.9s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_ops_test_cpu PASSED in 136.8s Stats over 10 runs: max = 136.8s, min = 14.8s, avg = 85.0s, dev = 39.7s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_sparse_mat_mul_grad_test_cpu PASSED in 24.6s Stats over 10 runs: max = 24.6s, min = 5.5s, avg = 10.5s, dev = 5.0s //tensorflow/python/kernel_tests/math_ops:cwise_ops_unary_test_cpu PASSED in 42.2s Stats over 10 runs: max = 42.2s, min = 8.0s, avg = 17.1s, dev = 9.1s //tensorflow/python/kernel_tests/math_ops:segment_reduction_ops_test_cpu PASSED in 56.0s Stats over 10 runs: max = 56.0s, min = 6.4s, avg = 27.8s, dev = 15.0s //tensorflow/python/kernel_tests/nn_ops:pooling_ops_test_cpu PASSED in 90.3s Stats over 10 runs: max = 90.3s, min = 10.0s, avg = 33.4s, dev = 26.5s //tensorflow/python/kernel_tests/nn_ops:rnn_test_cpu PASSED in 43.9s Stats over 10 runs: 
max = 43.9s, min = 8.9s, avg = 23.9s, dev = 9.4s //tensorflow/python/kernel_tests/random:random_index_shuffle_test PASSED in 30.4s Stats over 10 runs: max = 30.4s, min = 6.4s, avg = 17.4s, dev = 7.7s //tensorflow/python/kernel_tests/random:stateless_random_ops_test_cpu PASSED in 292.4s Stats over 10 runs: max = 292.4s, min = 67.7s, avg = 174.5s, dev = 85.6s //tensorflow/python/ops:special_math_ops_test_cpu PASSED in 59.7s Stats over 10 runs: max = 59.7s, min = 7.9s, avg = 19.4s, dev = 15.1s //tensorflow/python/ops:weak_tensor_special_math_ops_test_cpu PASSED in 32.9s Stats over 10 runs: max = 32.9s, min = 7.3s, avg = 13.6s, dev = 7.7s //tensorflow/python/ops/numpy_ops/tests:np_indexing_test PASSED in 124.0s Stats over 10 runs: max = 124.0s, min = 91.3s, avg = 106.0s, dev = 8.9s //tensorflow/python/ops/ragged:ragged_tensor_supported_values_test PASSED in 28.3s Stats over 10 runs: max = 28.3s, min = 13.9s, avg = 18.0s, dev = 4.3s //tensorflow/python/saved_model:load_test_cpu PASSED in 361.9s Stats over 10 runs: max = 361.9s, min = 198.0s, avg = 244.5s, dev = 42.0s //tensorflow/compiler/tests:fft_test_cpu PASSED in 19.2s Stats over 12 runs: max = 19.2s, min = 6.4s, avg = 13.1s, dev = 4.8s //tensorflow/python/data/experimental/kernel_tests:group_by_reducer_test PASSED in 17.2s Stats over 12 runs: max = 17.2s, min = 4.3s, avg = 9.7s, dev = 4.6s //tensorflow/python/data/kernel_tests:choose_from_datasets_test PASSED in 53.3s Stats over 12 runs: max = 53.3s, min = 5.1s, avg = 17.0s, dev = 13.1s //tensorflow/python/data/kernel_tests:memory_cleanup_test_cpu PASSED in 11.2s Stats over 12 runs: max = 11.2s, min = 4.5s, avg = 6.5s, dev = 1.9s //tensorflow/python/distribute:moving_averages_test_2gpu PASSED in 18.4s Stats over 12 runs: max = 18.4s, min = 9.9s, avg = 13.5s, dev = 2.3s //tensorflow/python/distribute:moving_averages_test_cpu PASSED in 26.8s Stats over 12 runs: max = 26.8s, min = 9.4s, avg = 13.7s, dev = 4.2s //tensorflow/python/eager/polymorphic_function:polymorphic_function_test_cpu PASSED in 54.6s Stats over 15 runs: max = 54.6s, min = 12.3s, avg = 20.2s, dev = 11.1s //tensorflow/python/kernel_tests/linalg:linear_operator_low_rank_update_test_cpu PASSED in 301.8s Stats over 15 runs: max = 301.8s, min = 178.7s, avg = 251.9s, dev = 41.0s //tensorflow/python/kernel_tests/nn_ops:rnn_cell_test_cpu PASSED in 162.1s Stats over 15 runs: max = 162.1s, min = 9.3s, avg = 36.8s, dev = 39.5s //tensorflow/python/data/experimental/kernel_tests/service:dynamic_sharding_test PASSED in 14.2s Stats over 16 runs: max = 14.2s, min = 4.6s, avg = 8.8s, dev = 3.0s //tensorflow/python/data/kernel_tests:snapshot_test PASSED in 28.7s Stats over 16 runs: max = 28.7s, min = 8.9s, avg = 18.1s, dev = 5.4s //tensorflow/python/kernel_tests/control_flow:control_flow_ops_py_test_cpu PASSED in 50.7s Stats over 16 runs: max = 50.7s, min = 6.5s, avg = 21.3s, dev = 11.8s //tensorflow/python/kernel_tests/linalg:matrix_exponential_op_test PASSED in 32.1s Stats over 16 runs: max = 32.1s, min = 5.7s, avg = 13.9s, dev = 7.9s //tensorflow/python/kernel_tests/signal:dct_ops_test_cpu PASSED in 39.4s Stats over 16 runs: max = 39.4s, min = 19.8s, avg = 28.5s, dev = 5.3s //tensorflow/python/ops:image_ops_test_cpu PASSED in 20.8s Stats over 16 runs: max = 20.8s, min = 6.9s, avg = 13.2s, dev = 3.7s //tensorflow/python/data/kernel_tests:map_test PASSED in 167.1s Stats over 19 runs: max = 167.1s, min = 66.6s, avg = 105.1s, dev = 21.9s //tensorflow/compiler/tests:pooling_ops_3d_test_cpu PASSED in 13.3s Stats over 20 runs: max = 13.3s, min = 
4.2s, avg = 6.0s, dev = 1.8s //tensorflow/compiler/tests:pooling_ops_3d_test_cpu_mlir_bridge_test PASSED in 11.8s Stats over 20 runs: max = 11.8s, min = 4.1s, avg = 6.0s, dev = 1.6s //tensorflow/compiler/tests:pooling_ops_test_cpu PASSED in 15.2s Stats over 20 runs: max = 15.2s, min = 4.8s, avg = 6.9s, dev = 2.5s //tensorflow/compiler/tests:pooling_ops_test_cpu_mlir_bridge_test PASSED in 22.0s Stats over 20 runs: max = 22.0s, min = 3.7s, avg = 6.7s, dev = 4.0s //tensorflow/compiler/tests:stochastic_cast_op_test_cpu PASSED in 15.4s Stats over 20 runs: max = 15.4s, min = 5.5s, avg = 7.9s, dev = 2.7s //tensorflow/compiler/tests:unary_ops_test_cpu PASSED in 35.3s Stats over 20 runs: max = 35.3s, min = 4.1s, avg = 10.7s, dev = 8.5s //tensorflow/compiler/tests:unary_ops_test_cpu_mlir_bridge_test PASSED in 561.3s Stats over 20 runs: max = 561.3s, min = 5.8s, avg = 417.4s, dev = 147.3s //tensorflow/dtensor/python/tests:rng_test_cpu PASSED in 17.3s Stats over 20 runs: max = 17.3s, min = 7.4s, avg = 9.0s, dev = 2.1s //tensorflow/python/autograph/tests:loop_control_flow_test PASSED in 98.9s Stats over 20 runs: max = 98.9s, min = 14.4s, avg = 32.2s, dev = 16.6s //tensorflow/python/kernel_tests:metrics_test PASSED in 106.1s Stats over 20 runs: max = 106.1s, min = 16.8s, avg = 40.2s, dev = 24.8s //tensorflow/python/kernel_tests/array_ops:matrix_band_part_op_test_cpu PASSED in 29.3s Stats over 20 runs: max = 29.3s, min = 4.2s, avg = 10.2s, dev = 6.1s //tensorflow/python/kernel_tests/data_structures:barrier_ops_test PASSED in 14.2s Stats over 20 runs: max = 14.2s, min = 4.0s, avg = 6.3s, dev = 3.2s //tensorflow/python/kernel_tests/linalg:eig_op_test PASSED in 95.4s Stats over 20 runs: max = 95.4s, min = 5.3s, avg = 25.7s, dev = 26.7s //tensorflow/python/kernel_tests/linalg:linalg_grad_test_cpu PASSED in 277.3s Stats over 20 runs: max = 277.3s, min = 72.4s, avg = 157.9s, dev = 62.1s //tensorflow/python/kernel_tests/linalg:norm_op_test_cpu PASSED in 27.7s Stats over 20 runs: max = 27.7s, min = 6.3s, avg = 11.2s, dev = 4.9s //tensorflow/python/kernel_tests/linalg:normalize_op_test_cpu PASSED in 41.7s Stats over 20 runs: max = 41.7s, min = 8.9s, avg = 22.7s, dev = 9.7s //tensorflow/python/kernel_tests/linalg:qr_op_test_cpu PASSED in 472.0s Stats over 20 runs: max = 472.0s, min = 63.2s, avg = 225.4s, dev = 130.7s //tensorflow/python/kernel_tests/linalg:self_adjoint_eig_op_test_cpu PASSED in 49.9s Stats over 20 runs: max = 49.9s, min = 6.3s, avg = 20.2s, dev = 11.9s //tensorflow/python/kernel_tests/math_ops:batch_matmul_op_test_cpu PASSED in 45.1s Stats over 20 runs: max = 45.1s, min = 14.7s, avg = 30.4s, dev = 10.1s //tensorflow/python/kernel_tests/math_ops:matmul_op_test_cpu PASSED in 76.2s Stats over 20 runs: max = 76.2s, min = 31.2s, avg = 49.0s, dev = 13.1s //tensorflow/python/kernel_tests/math_ops:tensordot_op_test_cpu PASSED in 214.7s Stats over 20 runs: max = 214.7s, min = 16.7s, avg = 74.3s, dev = 56.0s //tensorflow/python/kernel_tests/nn_ops:embedding_ops_test_cpu PASSED in 99.9s Stats over 20 runs: max = 99.9s, min = 16.0s, avg = 38.9s, dev = 17.8s //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_ft_test FLAKY, failed in 4 out of 21 in 900.9s Stats over 21 runs: max = 900.9s, min = 10.8s, avg = 396.7s, dev = 332.3s /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test/shard_2_of_17/test_attempts/attempt_1.log 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test/shard_13_of_17/test_attempts/attempt_1.log
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test/shard_1_of_17/test_attempts/attempt_1.log
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/data/experimental/kernel_tests/service/distributed_save_ft_test/shard_12_of_17/test_attempts/attempt_1.log
//tensorflow/python/data/kernel_tests:interleave_test PASSED in 68.3s Stats over 24 runs: max = 68.3s, min = 13.7s, avg = 38.7s, dev = 15.0s
//tensorflow/python/data/kernel_tests:sample_from_datasets_test PASSED in 69.1s Stats over 24 runs: max = 69.1s, min = 7.4s, avg = 21.0s, dev = 13.4s
//tensorflow/dtensor/python/tests:multi_device_spmd_test_cpu PASSED in 40.7s Stats over 25 runs: max = 40.7s, min = 25.0s, avg = 29.5s, dev = 3.0s
//tensorflow/python/kernel_tests/nn_ops:conv_ops_3d_test_cpu PASSED in 101.9s Stats over 30 runs: max = 101.9s, min = 4.7s, avg = 29.6s, dev = 19.9s
//tensorflow/python/data/experimental/kernel_tests/service:data_service_ops_test PASSED in 22.2s Stats over 32 runs: max = 22.2s, min = 4.3s, avg = 9.5s, dev = 4.1s
//tensorflow/python/data/experimental/kernel_tests/service:worker_tags_test PASSED in 21.2s Stats over 32 runs: max = 21.2s, min = 4.1s, avg = 10.6s, dev = 4.3s
//tensorflow/python/distribute:multi_process_runner_test_2gpu PASSED in 219.1s Stats over 35 runs: max = 219.1s, min = 4.8s, avg = 24.7s, dev = 38.7s
//tensorflow/python/distribute:multi_process_runner_test_cpu PASSED in 218.8s Stats over 35 runs: max = 218.8s, min = 4.0s, avg = 25.4s, dev = 39.4s
//tensorflow/core/kernels:stochastic_cast_op_test PASSED in 1.7s Stats over 48 runs: max = 1.7s, min = 0.3s, avg = 0.5s, dev = 0.3s
//tensorflow/compiler/mlir/quantization/tensorflow/python:quantize_model_test PASSED in 361.9s Stats over 50 runs: max = 361.9s, min = 75.1s, avg = 124.3s, dev = 56.8s
//tensorflow/compiler/tests:sort_ops_test_cpu PASSED in 50.0s Stats over 50 runs: max = 50.0s, min = 4.5s, avg = 16.2s, dev = 9.0s
//tensorflow/compiler/tests:sort_ops_test_cpu_mlir_bridge_test PASSED in 29.6s Stats over 50 runs: max = 29.6s, min = 4.0s, avg = 12.9s, dev = 5.8s
//tensorflow/python/kernel_tests/linalg:linear_operator_circulant_test_cpu PASSED in 203.0s Stats over 50 runs: max = 203.0s, min = 79.3s, avg = 134.8s, dev = 24.4s
//tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_dense_mat_mul_grad_test_cpu PASSED in 41.2s Stats over 50 runs: max = 41.2s, min = 6.7s, avg = 19.7s, dev = 8.5s
//tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_dense_mat_mul_onednn_grad_test PASSED in 48.4s Stats over 50 runs: max = 48.4s, min = 7.9s, avg = 19.8s, dev = 9.0s
//tensorflow/python/kernel_tests/math_ops:cwise_ops_binary_test_cpu PASSED in 88.8s Stats over 50 runs: max = 88.8s, min = 12.6s, avg = 40.5s, dev = 18.8s
//tensorflow/python/kernel_tests/math_ops:cwise_ops_test_cpu PASSED in 37.6s Stats over 50 runs: max = 37.6s, min = 4.1s, avg = 12.2s, dev = 7.2s
Executed 3083 out of 3083 tests: 3083 tests pass.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
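Note: the closing hint is Bazel's standard suggestion for targets whose declared size/timeout is larger than needed. As a minimal sketch (assuming a local checkout and the same Bazel configuration as this run; the target chosen below is simply the flaky target reported above, not a prescription), the flag could be added on a re-run to surface which targets triggered the warning:

    bazel test --test_verbose_timeout_warnings //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_ft_test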